title (string, 1-827 chars, nullable ⌀) | uuid (string, 36 chars) | pmc_id (string, 5-8 chars) | search_term (string, 44 classes) | text (string, 8-8.58M chars)
---|---|---|---|---
Three-Dimensional Choroidal Vessels Assessment in Diabetic Retinopathy | a0bc1450-29a8-46cd-a65a-15c44330423d | 11951060 | Cardiovascular System[mh]

Patient Selection

In this retrospective, cross-sectional study, we analyzed the eyes of patients with DR and compared them with healthy controls matched for age and sex. The study was conducted at the Medical Retina and Vitreoretinal Surgery department of the University of Pittsburgh School of Medicine from July 2023 to June 2024. The study adhered to the Declaration of Helsinki, with a waiver of informed consent granted owing to its retrospective nature. Participants were individuals diagnosed with either proliferative or nonproliferative DR, with or without DME. They were categorized based on laboratory data, such as hemoglobin A1c and fasting blood sugar levels, as well as thorough fundus examinations conducted by an experienced retina specialist. We excluded participants with other eye conditions such as vitreoretinal diseases, uveitis, glaucoma, vascular occlusion, AMD, central serous chorioretinopathy, and high myopia. To ensure a more homogeneous sample and reliable measurements, we included only patients with a spherical equivalent between −2.5 and +2.5 diopters. We also excluded those who had undergone any eye surgery except uncomplicated cataract surgery. Poor-quality OCT scans resulting from ocular surface disorders, advanced cataracts, vitreous hemorrhage, media opacities, or severe sub–internal limiting membrane or subhyaloid hemorrhage were likewise grounds for exclusion. We performed a power analysis to determine the minimum sample size required to detect significant effects with the desired level of confidence.

OCT Imaging Acquisition

We used the Plex Elite 9000 system (Carl Zeiss Meditec, Dublin, CA, USA) to capture high-resolution images centered on the fovea. The system's expanded-field swept-source OCT (SS-OCT) mode acquired 12 × 12 mm scans at a 100-kHz acquisition rate; the device supports up to 200,000 A-scans per second at a 1060-nm wavelength, with tissue penetration up to 6 mm and an axial resolution of approximately 6.3 µm. We assessed scan quality using the SS-OCT software's built-in scoring system; only scans with a score of 6 or higher out of 10 (displayed in green) were included in the analysis. The SS-OCT scans were exported as 8-bit volumes, each containing 1024 B-scans with a resolution of 1024 × 1536 pixels. After acquisition, we reviewed the multimodal imaging data, and a retina specialist categorized eyes with DR into proliferative and nonproliferative stages, as well as those with or without DME.

Automated Choroidal Vessel Segmentation

Our methodology combines automated and manual techniques to measure the 3D cross-sectional diameter of choroidal vessels, using the same approach as our recently published work on AMD and another recent publication on healthy eyes. We started with a ResUnet, a deep learning architecture, to outline the boundaries of the choroid in structural SS-OCT scans: the choroid inner boundary at the junction of the RPE and choriocapillaris, and the choroid outer boundary at the choroidal–scleral junction. The resulting segmentation was smoothed volumetrically, with manual boundary correction applied to address any residual errors.
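For orientation, the exported volumes described above can be loaded for downstream processing with a few lines of Python. This is a minimal sketch assuming a flat, headerless byte layout for the raw .IMG export; the helper name and layout are illustrative assumptions, not the vendor's documented format.

```python
import numpy as np

# Geometry taken from the export settings above:
# 1024 B-scans, each 1024 (depth) x 1536 (width) pixels, 8-bit grayscale.
N_BSCANS, HEIGHT, WIDTH = 1024, 1024, 1536

def load_ss_oct_volume(path: str) -> np.ndarray:
    """Read a raw 8-bit export into a (bscan, depth, width) uint8 array."""
    voxels = np.fromfile(path, dtype=np.uint8)
    expected = N_BSCANS * HEIGHT * WIDTH
    if voxels.size != expected:
        raise ValueError(f"expected {expected} voxels, got {voxels.size}")
    return voxels.reshape(N_BSCANS, HEIGHT, WIDTH)
```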
The deep learning model, detailed in our unpublished research, achieved 93% accuracy in delineating choroidal boundaries, which improved to 100% after manual correction. The next step was to separate the choroidal blood vessels from the SS-OCT volumes. Because OCT image acquisition presents challenges such as speckle noise, retinal shadows, contrast fluctuations, and misalignment of B-scans, as well as the complex architecture and intensity characteristics of choroidal blood vessels, we used the Phansalkar thresholding method to distinguish between luminal and stromal areas. Traditional intensity-based thresholding techniques often struggle to segment vessels accurately owing to these artifacts and complexities. To overcome this limitation, we applied the Phansalkar method, which dynamically calculates local thresholds within each 16 × 16 pixel tile across the B-scans, enabling clear differentiation between luminal and stromal areas. Subsequently, morphological postprocessing is applied to eliminate spurious elements, resulting in a seamlessly constructed 3D model of the choroidal vasculature. We created two graphical user interfaces to streamline both the automated and manual tasks in our method. The first interface is designed for accurately extracting the 3D choroidal vasculature from OCT data: it handles raw SS-OCT volumes in .IMG or .JPG formats, assists in segmenting choroid boundaries and vessels, and allows manual correction of the choroidal segmentation if needed. We used ImageJ 1.51s (National Institutes of Health, Bethesda, MD, USA) to create a mask of the optic disc, which is used to exclude choroidal vessels at the optic disc location. After this, we segment the choroidal vessels and save the resulting 3D models of the choroidal vasculature for later manual measurement of cross-sectional diameters. To measure the ChT, CVI, and cross-sectional vessel diameters, the datasets are imported into the second graphical user interface. This interface allows the grader to select any volume by identifying the center of the fovea in the RPE en face image, which corresponds with the center of the foveal avascular zone in the cross-sectional B-scan. A 12 × 12 mm grid is then applied over the 3D choroidal vasculature, highlighting the central, nasal, temporal, superior, and inferior sectors; the central sector is defined by a 4-mm diameter circle. Our algorithm was able to segment the entire choroid but had difficulty defining and reconstructing the choriocapillaris vessels; as a result, 3D vessel reconstruction was possible only for Sattler's and Haller's vessels. This study specifically examined the largest vessels (>100 µm) in each sector, all of which belonged to the Haller layer.

Assessment of Choroidal Vessel Diameter

Selecting a point on the 3D map brings up a window showing the vasculature in a small, fixed-size view around that point. The grader can rotate the vasculature to obtain the best view of the vessels for accurate measurement. The cross-sectional diameter was measured from the outermost visible portions of each vessel, with one measurement taken at the thickest part of each vessel. The average of these 15 measurements was then calculated to determine the mean choroidal vessel diameter (MChVD) for each sector.
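To make the binarization step concrete, the sketch below applies Phansalkar local thresholding to a single B-scan. It assumes the published default parameters (k = 0.25, R = 0.5, p = 2, q = 10, with intensities normalized to [0, 1]) and approximates the 16 × 16 pixel tiling with a 16-pixel sliding window; it is an illustrative reimplementation, not the authors' exact code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def phansalkar_binarize(bscan: np.ndarray, window: int = 16,
                        k: float = 0.25, r: float = 0.5,
                        p: float = 2.0, q: float = 10.0) -> np.ndarray:
    """Return a boolean mask of luminal (hyporeflective) pixels."""
    img = bscan.astype(np.float64) / 255.0  # normalize 8-bit input to [0, 1]
    mean = uniform_filter(img, size=window)           # local mean
    sq_mean = uniform_filter(img * img, size=window)  # local mean of squares
    std = np.sqrt(np.clip(sq_mean - mean * mean, 0.0, None))  # local std
    # Phansalkar threshold: t = m * (1 + p*exp(-q*m) + k*(s/R - 1))
    threshold = mean * (1.0 + p * np.exp(-q * mean) + k * (std / r - 1.0))
    return img < threshold  # pixels darker than the local threshold = lumen
```

A morphological opening on the resulting mask (e.g., with scipy.ndimage.binary_opening) would correspond to the postprocessing step that removes spurious elements.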
To evaluate the intraclass correlation coefficient (ICC), two masked readers (E.S. and K.D.), who were unaware of the patients' details, performed measurements across all sectors for 10 eyes. A total of 750 measurements taken by the second reader (K.D.) were used to assess inter-reader reliability, and the measurements taken by the first reader (E.S.) were used for this study. The ChT and the CVI for the entire volume were determined using the ResUnet segmentation and Phansalkar thresholding methods.

Statistical Analysis

Data normality was assessed using the Shapiro–Wilk test, followed by parametric testing. The consistency between raters for image binarization was evaluated using the absolute agreement model of the ICC. ICC values were interpreted as follows: less than 0.5 indicated poor reliability; 0.5 to 0.75, moderate reliability; 0.75 to 0.9, good reliability; and greater than 0.90, excellent reliability. For categorical data, the χ² test was applied. Demographic data, ChT, CVI, and MChVD were compared across predefined groups using linear mixed models. These groups included DR vs. age- and sex-matched healthy patients, proliferative vs. nonproliferative DR, and eyes with vs. without DME, as outlined in the study design. Because these comparisons were planned primary analyses rather than exploratory or post hoc tests, no adjustment for multiple testing was applied to them; where secondary multiple comparisons did arise, as in the inter-reader reliability analysis, Bonferroni correction was used. Linear mixed models were also used to account for the correlation between the two eyes of each patient, with patient ID set as a random effect to capture this inherent correlation. A P value of less than 0.05 was considered statistically significant. All statistical analyses were conducted using IBM SPSS Statistics version 26 (IBM Corp., Armonk, NY, USA).
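The random-effect specification above translates directly into a random-intercept model. Here is a minimal Python sketch using statsmodels; the file and column names (mchvd, group, patient_id) are hypothetical placeholders for the study's actual dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per eye: outcome (e.g., global MChVD), group label (DR vs. healthy),
# and a patient ID shared by fellow eyes of the same subject.
df = pd.read_csv("eyes.csv")

# A random intercept per patient absorbs the between-eye correlation.
model = smf.mixedlm("mchvd ~ group", data=df, groups=df["patient_id"])
result = model.fit()
print(result.summary())
```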
Demographic Data

In our analysis, we examined 100 eyes from 66 individuals. Among the 45 patients with DR, both eyes were included for 28 individuals and only 1 eye for 17 patients, because of poor scan quality owing to fixation loss in 13 eyes, vascular accident in 2 eyes, and anterior ischemic optic neuropathy in 2 eyes. Within the healthy group, both eyes were included for 6 individuals and 1 eye each for the remaining 15, because of low-quality OCT owing to cataract in 4 eyes and fixation loss in 11 eyes. A total of 73 eyes of 45 patients with DR and 27 eyes of 21 healthy, age- and sex-matched controls were included in this study. The average age of the participants was 60.50 ± 15.08 years, with 33 females (50.00%). There were no significant age differences between patients with DR and healthy subjects (61.22 ± 11.87 years vs. 59.00 ± 20.53 years; P = 0.582), nor significant sex differences between the groups (21 females (46.66%) among patients with DR vs. 12 (57.14%) among the healthy individuals; P = 0.944). Best-corrected visual acuity was significantly lower in DR eyes compared with controls (0.375 ± 0.433 logMAR vs. 0.017 ± 0.036 logMAR; P < 0.001). Among the 73 diabetic eyes analyzed, 36 were classified as proliferative and 37 as nonproliferative. Additionally, 42 eyes exhibited DME, whereas 31 did not. Furthermore, 30 eyes had been treated with panretinal photocoagulation, 29 eyes had received intravitreal anti-VEGF injections, and 15 eyes had been administered intravitreal dexamethasone implants.

Three-Dimensional Assessment in DR vs. Healthy Eyes

An assessment of 10 eyes (150 choroidal vessels) for ICC between two masked readers for measurements of MChVD demonstrated a high level of agreement across all sectors, with an ICC value of 0.892 and a confidence interval ranging from 0.811 to 0.933. After Bonferroni correction, a P value of less than 0.0167 was considered statistically significant. Comparing eyes with DR to healthy controls revealed no significant difference in global or per-sector ChT (average ChT in DR, 217.688 ± 53.991 µm vs. 216.934 ± 45.607 µm in healthy eyes; P = 0.948). However, CVI was significantly decreased in DR (average CVI in DR, 0.375 ± 0.037 vs. 0.394 ± 0.038 in healthy eyes; P = 0.029), particularly in the global, temporal, and central measurements (P < 0.05). In evaluating MChVD, results showed a strong and significant reduction in DR eyes compared with healthy eyes, both globally (average MChVD in DR, 200.472 ± 28.246 µm vs. 240.264 ± 22.350 µm in healthy eyes; P < 0.001) and across all sectors (P < 0.001). A representative example is shown in the figure.

Three-Dimensional Assessment in Proliferative vs. Nonproliferative DR

The comparison between 36 eyes with proliferative DR and 37 eyes with nonproliferative DR indicated a reduced ChT in proliferative DR globally (average ChT in proliferative DR, 209.102 µm vs. 223.679 ± 61.356 µm in nonproliferative DR; P = 0.259) and in most sectors, although these differences were not statistically significant (P > 0.05). Choroidal thinning did reach statistical significance in the temporal sector in eyes with proliferative DR compared with nonproliferative DR (202.570 ± 44.577 µm vs. 225.080 ± 51.046 µm; P = 0.049). Decreased MChVD and CVI were seen globally and in each sector, but these differences were not statistically significant (P > 0.05).

Three-Dimensional Assessment in DR With DME vs. DR Without DME

The analysis between 42 eyes with DR and DME (21 proliferative and 21 nonproliferative) and 31 eyes with DR without DME (15 proliferative and 16 nonproliferative) revealed that eyes with DME had a lower global CVI (DR with DME, 0.365 ± 0.032 vs. DR without DME, 0.389 ± 0.040; P = 0.008). This difference was statistically significant particularly in the temporal, inferior, and superior sectors (P < 0.05). No significant differences were observed in ChT between these two subgroups. Additionally, MChVD showed a reduction in DME globally that was not statistically significant (DR with DME, 196.449 ± 27.221 µm vs. DR without DME, 205.922 ± 29.134 µm; P = 0.843), as well as across all sectors (P > 0.05).
Using a novel algorithm for the 3D assessment of choroidal vessels, we found that the MChVD was significantly lower in eyes with DR compared with healthy eyes (200.472 ± 28.246 µm vs. 240.264 ± 22.350 µm; P < 0.001), and MChVD was significantly reduced in all sectors in eyes with DR (P < 0.001). Additionally, the CVI was reduced in eyes with DR, particularly in the global, temporal, and central measurements (P < 0.05). Eyes with proliferative DR (PDR) demonstrated a largely nonsignificant decrease in ChT (temporal sector, P < 0.05; other sectors, P > 0.05), MChVD (P > 0.05), and CVI (P > 0.05) compared with eyes with nonproliferative DR (NPDR). Eyes with DME showed a nonsignificant decrease in MChVD (P > 0.05) and ChT (P > 0.05), and a significantly reduced CVI in the average, temporal, inferior, and superior measurements (P < 0.05) compared with eyes without DME. This study used a novel, validated, semiautomated algorithm to create 3D visualizations of choroidal vessels. This tool allowed us to measure the diameters of large choroidal vessels accurately within a 3D framework, enabling a thorough analysis of the choroidal vasculature in patients with DR. Previous studies on the choroid have mainly concentrated on 2D B-scans. Given the variability of depth and 3D positioning, measurements obtained from 2D or en face imaging are inherently unreliable. Most previous studies focused on vascular changes in the choriocapillaris based on OCT angiography data; in contrast, our findings relate globally to the attenuation of the medium and large choroidal vascular networks in the Sattler and Haller layers, derived from structural data. Therefore, 3D analysis is crucial for a more accurate assessment of choroidal vessels, leading to a deeper understanding of disease pathophysiology. Previous studies examining the relationship between ChT and the presence of DM, with or without DR of varying severity, have reported inconsistent results. A meta-analysis by Endo et al. that included 17 related studies found that subfoveal ChT was thinner in diabetic eyes without DR than in healthy eyes. Another study, by Xu et al., reported that patients with DM showed a slightly thicker choroid compared with healthy eyes. In the presence of DR, some studies indicated that greater DR severity was associated with a thinner ChT. Wang et al. reported that ChT increased in the early stages of DR and then decreased as DR progressed, but the presence of DME was not significantly associated with ChT. However, this association was not observed in the population-based Beijing Eye Study, which found that patients with DM had slightly increased ChT but that DR stage did not affect ChT. Kinoshita et al. found that the choroidal lumen and stroma may increase as DR progresses. We found no significant difference in ChT between eyes with DR and healthy eyes; additionally, we noted a nonsignificant reduction in the proliferative stage and in eyes with DME. A decrease in ChT leads to decreased choroidal blood flow, compromising the supply of oxygen and nutrients to the retinal tissues. This process can contribute to ischemia, which may accelerate the progression of DR; this relationship highlights the importance of monitoring ChT in patients with DR. Several studies have shown a correlation of CVI with DM, with or without DR. Keskin et al. observed that CVI is generally lower in patients with diabetes, with a more pronounced decrease in those with DR. They proposed that CVI could act as a sensitive and early indicator of the onset of DR, and a negative correlation between CVI and DR severity has been reported. A study by Aksoy et al. on patients with type 1 DM without DR demonstrated that CVI could serve as an indicator of subclinical choroidal dysfunction in these patients. They reported no significant differences in ChT, total choroidal area, luminal area, or stromal area between healthy subjects and patients with diabetes; however, patients with type 1 DM exhibited significantly reduced CVI compared with healthy controls, with a negative correlation between CVI and disease duration. Our study found that CVI was significantly decreased in eyes with DR compared with healthy eyes. We also observed a reduction in CVI in eyes with DME, as well as a nonsignificant reduction in CVI in proliferative DR. We demonstrated that CVI correlates with both the occurrence and the severity of disease, suggesting its potential as a predictive biomarker in DR. An increase in choroidal stromal volume owing to inflammation or extracellular fluid accumulation, together with a decrease in vessel volume owing to a choroidal blood flow deficit and vessel constriction, may underlie the decrease in CVI during disease progression. During DR progression, choroidal vessel diameter may decrease owing to vascular constriction from choroidal hypoxia, with changes in blood flow occurring before retinopathy manifests. A study by Muir et al. investigated choroidal and retinal blood flow in Ins2Akita diabetic mouse models using magnetic resonance imaging; a deficit in choroidal blood flow was detected 5 months earlier than changes in retinal blood flow and decreases in visual acuity. This early decrease in choroidal blood flow may offer a way to detect early DR before significant damage or progression to proliferative retinopathy. Other studies have indicated that choroidal volume and blood flow are significantly reduced in patients with proliferative DR, especially those with DME. Our study revealed a significant reduction in MChVD in eyes with DR compared with healthy eyes, as well as a nonsignificant reduction in MChVD in eyes with DME. This advanced 3D method for measuring the diameter of large choroidal vessels may serve as a novel biomarker for the detection and progression of DR. Our study had some limitations owing to its retrospective, cross-sectional design and limited sample size; a larger sample could provide more detailed results. The cross-sectional approach also prevented us from tracking progressive choroidal changes in patients with DR over time, which is important for better understanding the choroidal role in DR. Owing to the lack of choriocapillaris delineation, 3D reconstruction was limited to Sattler's and Haller's vessels, focusing on the largest vessels (>100 µm) in each sector. Because measurements were not adjusted for axial length, angular units such as arcminutes or degrees would be ideal, but we were unable to use these scales during postprocessing analysis. Further research using 3D imaging is necessary to gain deeper insights into choroidal changes in DR, which our team aims to explore in future experiments.
This study found that in eyes affected by DR, the MChVD, ChT, and CVI exhibited changes associated with disease occurrence and progression. Specifically, CVI and MChVD were decreased in DR eyes compared with healthy controls, and eyes with DME displayed reduced CVI and MChVD. Three-dimensional choroidal imaging offers a novel, noninvasive approach to examining choroidal vessel changes in DR and other ocular diseases, and it may significantly enhance our understanding of the underlying mechanisms driving pathogenesis. Future research could enable automated measurement of vessel diameters, vessel positioning, and other choroidal vascular features in 3D images at all stages of DR. Such measurements hold potential as imaging markers for identifying at-risk patients, enabling early detection, and tracking disease progression.
Annexin A1 Expression in Lupus Nephritis | 09849088-5629-4fb5-89d0-a439e27a9523 | 11770797 | Anatomy[mh]

The kidney is a vital organ that performs many crucial roles, including filtering blood, removing wastes, and controlling the body's fluid balance. It harbors a variety of resident immune cells, which play an important role in the maintenance of tissue homeostasis. Under physiologic conditions, endothelial, epithelial, and immune cells interact harmoniously within the kidney. Once activated by external or internal events, these cells produce inflammatory mediators that can dampen inflammation, repair tissue damage, and restore homeostasis, or that can trigger a dysregulated response and initiate kidney disease. In addition, the kidney itself is highly susceptible to immune-mediated diseases such as IgA nephropathy and membranoproliferative glomerulonephritis. Thus, it can be targeted by pathogenic immune responses against renal autoantigens and/or by local manifestations of systemic autoimmune disease, as in lupus nephritis (LN). LN is a frequent complication of lupus (approximately 50% of lupus patients develop renal disease) and is associated with increased morbidity and mortality. The morphologic changes in renal biopsies from patients with LN comprise a spectrum of vascular, glomerular, and tubulointerstitial lesions. Accordingly, various classifications of LN have been defined according to the different morphologic patterns of injury and their prognostic relevance. The classification of LN has evolved over the past 40 years, and each class has a different prognosis and treatment. Various processes, such as apoptosis, necrosis, and/or NETosis, act abnormally in LN patients and can contribute to disease pathogenesis. Inefficient clearance and accumulation of apoptotic cells generate a chronic inflammatory response and may lead to the breakdown of self-tolerance. A panoply of mediators, such as lipoxin A4, resolvins, and annexin A1 (AnxA1), is implicated in the clearance of apoptotic cells and the resolution of the inflammatory reaction. AnxA1 is a glucocorticoid-regulated protein with an important role in the resolution of inflammation: it regulates immune cell migration to the inflammatory site, stimulates neutrophil apoptosis in the late stage of inflammation, and induces the clearance of apoptotic cells by macrophages, leading to tissue homeostasis. AnxA1 is highly expressed in lung and nasopharynx tissues and moderately expressed in kidney and skin tissues; among immune cells, it is most strongly expressed by neutrophils and monocytes. AnxA1 levels are modulated in many diseases, including glomerulonephritis. Patients with glomerular disorders, including IgA nephropathy and diabetic nephropathy, showed higher expression of AnxA1 in renal tissues compared with controls. AnxA1 expression has also been evaluated in LN patients by immunohistochemistry and was found to be elevated compared with controls and with patients presenting minimal change glomerulonephritis. Therefore, in the present study we aimed: i) to explore AnxA1 expression in renal biopsies of LN patients and control renal biopsies in the Tunisian population; ii) to study AnxA1 expression in the different classes of LN; and iii) to analyse the correlation of AnxA1 expression with the clinical, serological, and histological data of LN patients in the Tunisian population.

Patients

A total of 24 patients with LN were included in the study.
Patients were followed up in the Nephrology and Internal Medicine Departments of the Hedi Chaker University Hospital of Sfax and were diagnosed according to the International Society of Nephrology and Renal Pathology Society (ISN/RPS) classification into 6 classes (I, II, III, IV, V, VI). If lesions of class III or IV were combined with lesions of class V, the biopsy was classified as III+V or IV+V. Patients with LN class VI (where more than 90% of glomeruli show global glomerulosclerosis) were excluded from the study. Sections containing fewer than 5 glomeruli on H&E staining were also excluded. Clinical, serological, and histological data of patients were collected at the time of biopsy.

Biopsies

Paraffin-embedded LN renal biopsies, fixed in Duboscq-Brasil, were collected from the Anatomopathological Department, Habib Bourguiba University Hospital, Sfax, Tunisia, for immunostaining. The biopsies were divided into 2 groups according to proliferative status:
- Severe proliferative group (G1): biopsies with class III, IV, III+V, and IV+V
- Non-severe proliferative group (G2): biopsies with class II and V
As controls, 8 paraffin-embedded renal tissues, fixed in formalin, were obtained from the normal part of nephrectomised kidneys (removed for renal carcinoma) and from cadaver kidneys (autopsy) of subjects without renal disease.

Immunohistochemistry technique

The staining procedure and preparation of tissue sections were performed as described in the study of Elloumi et al. For AnxA1 detection, an anti-AnxA1 Ab [rabbit polyclonal raised against amino acids 1-100 of human AnxA1 (Sigma Life Science, St. Louis, USA)] was used. For the negative control, sections were incubated in the absence of the primary Ab. As a positive control, we used interstitial inflammatory cell infiltrates as an internal positive control, since AnxA1 is mainly expressed by inflammatory cells. Semi-quantitative analysis was performed by light microscopy, and the interpretation was carried out by an anatomopathologist and a nephrologist. Photographic images of representative results were captured using a Zeiss® Axiocam color camera.

Biopsy scoring

We determined three scores for each sample: a distribution score, an intensity score, and an expression score resulting from the product intensity score × distribution score, based on the strategy adopted in the study of Elloumi et al., 2017. The intensity score ranged from 0 to 3: 0 for negative, 1 for weakly positive, 2 for moderately positive, and 3 for strongly positive staining. The distribution score ranged from 0 to 4 depending on the stained surface of glomeruli: 0 for 0%, 1 for 0% to 25%, 2 for 25% to 50%, 3 for 50% to 75%, and 4 for more than 75%. We also determined scores for fibrosis and for infiltration of the interstitium by inflammatory cells in LN patients: 0 for absence, 1 for 0% to 25%, 2 for 25% to 50%, and 3 for more than 50% of the interstitium. When comparing AnxA1 expression on renal biopsies between patients and controls, we took into consideration only the distribution score, since patient and control biopsies were fixed in different products (formalin for controls and Duboscq-Brasil for patients), which can influence the staining intensity scoring.
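As a small illustration of this scoring scheme, the sketch below maps a stained-glomeruli fraction to the 0-4 distribution score and computes the expression score as the product of intensity and distribution. The function names are hypothetical, and the handling of exact boundary values (e.g., a fraction of exactly 25%) is an assumption, since the text does not specify it.

```python
def distribution_score(stained_fraction: float) -> int:
    """Map the stained fraction of glomeruli (0.0-1.0) to the 0-4 score."""
    if stained_fraction <= 0.0:
        return 0
    if stained_fraction <= 0.25:
        return 1
    if stained_fraction <= 0.50:
        return 2
    if stained_fraction <= 0.75:
        return 3
    return 4

def expression_score(intensity: int, distribution: int) -> int:
    """Expression score = intensity (0-3) x distribution (0-4)."""
    if not (0 <= intensity <= 3 and 0 <= distribution <= 4):
        raise ValueError("intensity must be 0-3 and distribution 0-4")
    return intensity * distribution

# Example: moderately positive staining over 60% of glomeruli -> 2 x 3 = 6.
print(expression_score(2, distribution_score(0.60)))
```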
Statistical analysis

Results were analyzed using SPSS version 20. The nonparametric Mann-Whitney U-test was used to compare AnxA1 expression between patients and controls in renal tissues and to compare AnxA1 expression between LN patient groups. The Mann-Whitney test was also used to study the association between AnxA1 expression and qualitative clinical, serological, and histological features of lupus, while the Spearman correlation was used for correlations between AnxA1 expression scores and quantitative features of the disease. A p-value less than 0.05 was considered statistically significant.
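For illustration, both tests can be run in a few lines of Python with SciPy; the score vectors below are hypothetical placeholders, not study data.

```python
from scipy.stats import mannwhitneyu, spearmanr

# Hypothetical glomerular AnxA1 distribution scores (0-4) per biopsy.
patient_scores = [3, 4, 2, 4, 3, 4, 4, 3]
control_scores = [1, 2, 1, 0, 2, 1, 1, 1]

u_stat, p_value = mannwhitneyu(patient_scores, control_scores,
                               alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, p = {p_value:.4f}")

# Spearman correlation between paired glomerular and tubular scores.
glomerular = [3, 4, 2, 4, 3, 4, 4, 3]
tubular = [2, 1, 3, 1, 2, 1, 1, 2]
rho, p_corr = spearmanr(glomerular, tubular)
print(f"Spearman rho = {rho:.2f}, p = {p_corr:.4f}")
```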
Characteristics of the study subjects

A total of 24 patients from south Tunisia, with a sex ratio (F/M) of 4/1, were included in the study. The median age of patients was 34 years (range, 13-80 years). Lupus duration at the time of biopsy ranged from 1 to 14 years (median, 5.5 years), while nephropathy duration ranged from 1 to 9 years (median, 2 years). All patients had proteinuria, and 20% had hematuria. LN was classified according to the ISN/RPS classification: 2 patients had class II, 6 class III, 4 class IV, 1 class V, 3 class III+V, and 8 class IV+V. Histological and serological characteristics of patients, including immunoglobulin (Ig) deposits, different types of glomerular infiltration, and Abs detected in sera, are summarized in Table 1 and Table 2.

Renal Immunostaining

Histological assessment of AnxA1 expression in renal tissues showed different scores in patient and control biopsies. AnxA1 was expressed in the tubules and glomeruli of both patients and controls. The nonparametric Mann-Whitney test showed that glomerular AnxA1 distribution was higher in patient biopsies than in controls (p=0.00019), whereas tubular AnxA1 distribution did not differ between patients and controls (Fig. 1). Within LN classes, glomerular AnxA1 intensity was higher in patients with class III than in patients with class II or class IV (p=0.050 and p=0.023, respectively). The analysis of fibrosis and inflammatory cell infiltration scores did not show any significant differences between LN classes. When comparing AnxA1 expression between the LN groups, glomerular AnxA1 intensity was significantly higher in the severe proliferative group (G1) than in the non-severe proliferative group (G2) (p=0.019), whereas tubular AnxA1 intensity did not differ between the two groups (Fig. 2). The Spearman correlation test showed a negative correlation between AnxA1 distribution and intensity in glomeruli and their counterparts in tubules (Table 3). When studying the correlation between AnxA1 expression in renal tissues and the clinical, serological, and histological presentation of patients (Table 4), we found that tubular AnxA1 expression was lower in patients with anti-DNA Abs, anti-nucleosome Abs, and low CH50 (hypocomplementemia), whereas it correlated positively with infiltration of the interstitium by inflammatory cells.
AnxA1 is an endogenously produced anti-inflammatory protein to which many studies have been devoted in recent years for its contribution to the development of human diseases such as type 2 diabetes and pancreatic cancer. Several arguments support the implication of AnxA1 in the physiopathology of LN. Indeed, in our previous case-control study of ANXA1 polymorphisms in systemic lupus erythematosus, we found the rs3739959>G variant of the ANXA1 gene to be associated with LN susceptibility. Alice B et al. also found that high levels of anti-AnxA1 Abs were associated with renal complications in lupus patients. To investigate this hypothesis, we characterized AnxA1 expression in LN biopsies by conducting immunostaining of AnxA1 in renal tissues. Our results showed higher expression of AnxA1 in glomeruli than in tubules of both patients and controls. This is in concordance with the Human Protein Atlas data, which indicate that in healthy renal tissues tubules weakly express AnxA1, while glomeruli express higher levels.
Besides, Shuk-Man Ka et al. found that AnxA1 mRNA was weakly expressed in the renal tubules of normal controls and in regenerating tubules in renal tissues of patients with different nephropathies. AnxA1 expression is modulated differently across diseases, depending on their physiopathology. In human cancers, AnxA1 expression differs from one type to another: it is low in prostate cancer but high in breast cancer compared with controls. Xiao-Feng B et al. reported overexpression of AnxA1 in pancreatic cancer and suggested that the protein could be used as a biomarker for the diagnosis of this disease. A recent study reported an up-regulation of AnxA1 in the sera of type 1 diabetes patients. In the present study, we found a higher distribution of AnxA1 in patients with different LN classes compared with controls. This could be explained by the fact that AnxA1 is released by apoptotic polymorphonuclear neutrophils and apoptotic mesangial cells during the inflammatory reaction. AnxA1 expression has been evaluated in several glomerular disorders, including IgA nephropathy, diabetic nephropathy, and LN. Patients with glomerular disorders showed high levels of AnxA1 expression in renal tissues, except those with minimal change disease (MCD), who, like controls, expressed very little AnxA1 in their glomeruli. These findings are in concordance with our results. The main conceptual novelty of this study is the comparison of AnxA1 expression between different classes of LN. Our results showed that patients with severe proliferative classes had higher AnxA1 expression in their glomeruli than those with non-severe proliferative classes. These results suggest a link between AnxA1 expression and the severity of LN, as mentioned in previous studies. SM Ka et al. showed higher AnxA1 expression in secondary nephropathies (diabetic nephropathy and LN) than in primary proliferative nephropathies (IgA nephropathy and crescentic glomerulonephritis), in which expression was in turn higher than in non-proliferative nephropathies (MCD, membranous glomerulonephritis, and focal segmental glomerulosclerosis). The differences in AnxA1 expression that we found could be explained by the mechanisms of LN physiopathology. Renal injury in LN may result from auto-Abs binding to circulating antigens, or from auto-Abs binding to antigens deposited from the circulation in glomerular and vessel walls, causing in situ immune complex formation. Fc receptor engagement and complement binding then initiate an inflammatory and cytotoxic reaction. When this reaction is directed toward podocytes, immune complex formation occurs along the subepithelial side of the glomerular basement membrane, leading to membranous nephropathy (class V). In contrast, when the cytotoxic reaction is directed toward endocapillary cells, subendothelial immune complex formation triggers the endocapillary proliferative and exudative inflammatory reaction seen in proliferative classes III and IV. Endocapillary proliferative lesions are usually associated with leukocyte accumulation, especially monocytes and polymorphonuclear cells. Since proliferative classes are characterized by polymorphonuclear infiltration in addition to mesangial proliferation, we suggest neutrophils, which express the highest levels of AnxA1, as the source of the higher AnxA1 intensity in the glomeruli of severe proliferative classes compared with non-severe proliferative classes.
Feng Yu et al., in a review published in 2017, described the different types of infiltration associated with renal injury in LN. Glomerular endocapillary and mesangial proliferation, as well as infiltration of inflammatory cells, were described and used for the differentiation of LN classes, whereas in tubules, the infiltration of lymphocytes between tubular epithelial cells was described. These two groups of immune cells (neutrophils and lymphocytes) show different patterns of AnxA1 expression under the regulation of glucocorticoids. In fact, administration of glucocorticoids to healthy human volunteers increases AnxA1 expression in circulating monocytes and neutrophils and decreases AnxA1 expression in T cells. With this in mind, we studied the correlation between corticosteroid treatment and AnxA1 expression in LN patients. Our results showed no significant correlation between AnxA1 expression in renal tissues and corticosteroid treatment, indicating that AnxA1 expression depends on the type of proliferation in renal diseases rather than on corticosteroid treatment. These findings highlight the important role of glomerular AnxA1 expression in LN physiopathology, although no significant association was found between glomerular AnxA1 and the clinical, serological, and histological data of LN patients. In contrast, tubular AnxA1 expression was associated with the presence of anti-DNA and anti-nucleosome Abs and with CH50 hypocomplementemia. In a previous study conducted in our research laboratory, both anti-nucleosome and anti-DNA Abs were suggested as useful markers of LN assessment and of disease activity. Based on these findings, we suggest that tubular AnxA1 expression is associated with LN severity. Evidence of apoptosis activation has been described in experimental models and in human acute kidney injury. The intrinsic pathway of apoptosis is initiated by cell stress, which results in the release of apoptogenic factors that interact to activate caspase-9, while the extrinsic pathway leads to the activation of caspase-8. Caspase-9 or caspase-8 then activates effector caspases such as caspase-3. Once the cell death pathways are activated, apoptotic tubular cells express "eat-me" signals, such as KIM-1, to facilitate their identification by macrophages. Apoptotic cells are then eliminated by adjacent cells before loss of cell membrane integrity. Accordingly, we suggest that AnxA1 is expressed by tubular cells as a mechanism of resolution of inflammation. In fact, AnxA1 has been demonstrated to activate cell death pathways in inflammatory conditions, overriding the prosurvival signals that prolong neutrophil lifespan: it promotes caspase-3 cleavage and Bax activation and inhibits BAD phosphorylation. Scannell and collaborators demonstrated that apoptotic neutrophils release AnxA1 to the outer plasma membrane, where it acts on macrophages, promoting efferocytosis, the elimination of apoptotic cells. In conclusion, our findings demonstrate that AnxA1 is more highly expressed in the renal biopsies of LN patients than in controls. Within LN patients, our results suggest that AnxA1 could be used to differentiate severe proliferative from non-severe proliferative classes. However, additional studies are required before this protein can be used in the diagnosis of LN.
Use the right words: evaluating the effect of word choice and word count on quality of narrative feedback in ophthalmology competency-based medical education assessments | 6707c7bc-2401-4e9d-8903-203ffec5e6ac | 11725001 | Ophthalmology[mh] | Narrative comments comprise a large part of assessment in Competency-Based Medical Education (CBME) and provide a record of faculty feedback and coaching directed towards the learner. While there is an extensive body of research that identifies narrative comments as an essential part of CBME, - few studies have explored what language contributes to quality in written assessments. As medical residency training programs transition from traditional time-based models to competency-based and hybrid models, - there is a growing need to understand how feedback delivery may be optimized. Our study uses a quantitative method of evaluating qualitative written narrative feedback; at present there are four validated tools to evaluate the quality of narrative comments in the context of CBME. - When thoughtfully composed, narrative feedback is a personalized commentary on resident performance. Effective feedback has been described as timely, specific, and actionable, with an emphasis on coaching behaviors versus high-stakes assessment. Understanding the ingredients that contribute to excellent quality feedback may help guide evaluators to refine their word choice and length of comment to be the most effective. Presented at the International Conference on Residency Education (ICRE) in 2015, Ross introduced five words/phrases commonly seen in high quality narrative feedback. More recently, Branfield Day et al. identified similar phrases in assessment comments that conveyed coaching language to foster learning. These phrases help frame strategies to assist residents in building their skills and knowledge; for example, beginning a sentence with “remember that…” was often followed by specific, actionable and detailed suggestions for improvement. Feedback that uses coaching language instead of generalized descriptions of the learning interaction is more effective, and signals recommendations for resident improvement. One of the commonly cited barriers to faculty participation in CBME is time; there is an ever-expanding amount of clinical, teaching, and academic duties for a teaching physician. , With competing interests and depleting resources, making feedback delivery efficient by using words/phrases with the most impact ensures that coaching quality does not suffer under these constraints. In addition, the relationship between the quantity of words applied to a comment and the quality of feedback is relatively unexplored in assessment. Those that have explored this relationship have found that longer written comments are correlated with better quality feedback. , , However, a “sweet-spot” of length that is not formulaic, but provides guidance to optimize quality, allows evaluators to be aware of an approximate length of comment before plateauing into “extra words” for the sake of length. In July 2017, Queen’s University, in Kingston, Ontario, Canada implemented CBME for all 28 postgraduate specialty training programs. As such, the Queen’s University DOO was the first ophthalmology program in Canada to be fully immersed in CBME and has assessment data of trainees over this time period. The purpose of this study was to investigate the relationship between coaching language, word count and the quality of written feedback in resident assessments. 
Inter-rater agreement for the total QuAL score was previously established as excellent. Ultimately, by guiding purposeful word choice and length of written feedback, we hope to optimize the effectiveness and efficiency of feedback delivery in the context of CBME.

Study design
This retrospective cohort study was conducted at Queen's University and was approved by the Queen's University and Affiliated Hospitals Health Sciences Research Ethics Board (TRAQ 6029081). Ophthalmology resident assessment data from July 2017 to December 2020 were included.

Sample size
A total of 1997 assessments contained narrative comments and were scored and analyzed.

Study protocol
Ophthalmology resident assessment data were retrieved from Elentra™ (Integrated Teaching and Learning Platform) and anonymized. The data were coded with unique identifiers and names were removed. The anonymized data were entered into an Excel sheet by a research assistant separate from the grading process. Written feedback was assigned a QuAL score. The QuAL score consists of three components. The first (Evidence) is a 4-level item that asks, "Does the rater provide sufficient evidence about resident performance?", where zero indicates no comment at all and three a full description. The second (Suggestion) and third (Connection) are binary, where zero indicates "no" and one indicates "yes" in response to the questions, "Does the rater provide a suggestion for improvement?" and "Is the rater's suggestion linked to the behavior described?" All individual assessments were scored by an ophthalmology faculty member (SB), and a randomized sample of 10% was independently rescored by a final-year ophthalmology resident (RC) to ensure inter-rater reliability. Both raters were blinded to any identifying information and graded independently of one another. The intra-class correlation coefficient (ICC) for the two graders was excellent at 0.90 (95% CI 0.88-0.92, p < 0.001). All QuAL scores were completed prior to the coaching word analysis; the two raters did not have specific knowledge of identified coaching words in the literature prior to scoring the narrative comments.

Outcome measures
The primary outcomes of our study were the associations between QuAL score and specific coaching language ("suggest," "try(ing)," "because," "consider," "next step," "continue," and "next time"). These words/phrases were selected based on preliminary work by Ross (2015), with overlap from research conducted by Branfield Day. , "Continue" and "next time" were included as they were pre-existing prompts in the comments section of the evaluation forms. In addition, the words "discuss," "recognize," "demonstrate," "remember," "reflect," and "practice" were chosen by our research group as language that was potentially associated with better quality feedback. Commonly used phrases generally perceived as components of poor quality feedback ("read," "read more," and "review") were examined and were intended to represent negative controls. QuAL scores were assigned to each assessment prior to the identification of coaching words and negative control phrases.
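For readers who prefer to see the rubric operationalized, the following is a minimal sketch of the QuAL total as a function of its three components. The class and validation logic are our own illustration (the study itself scored assessments manually in Excel and SPSS), not part of the QuAL tool.

```python
from dataclasses import dataclass

@dataclass
class QualScore:
    """One QuAL rating: Evidence (0-3), Suggestion (0/1), Connection (0/1)."""
    evidence: int    # sufficient evidence about performance? 0 = none, 3 = full description
    suggestion: int  # suggestion for improvement provided? 0 = no, 1 = yes
    connection: int  # suggestion linked to the behavior described? 0 = no, 1 = yes

    def total(self) -> int:
        # Validate component ranges before summing to the 0-5 total.
        if self.evidence not in range(4):
            raise ValueError("Evidence must be 0-3")
        if self.suggestion not in (0, 1) or self.connection not in (0, 1):
            raise ValueError("Suggestion and Connection must be 0 or 1")
        return self.evidence + self.suggestion + self.connection

# Example: full description plus a suggestion that is linked to the behavior.
print(QualScore(evidence=3, suggestion=1, connection=1).total())  # 5
```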
Data analysis
Data were imported into IBM SPSS (Version 27.0, Armonk, NY, 2021) for statistical analysis. The correlation between the number of words and the QuAL score was explored using Spearman's Rho. The correlation between the number of times each comment contained the specific words or phrases, and the QuAL score, was also assessed with Spearman's Rho. Independent samples t-tests were used to compare the mean QuAL scores. To supplement the initial Spearman correlation and provide more detail about the QuAL score at different levels, the word count was subdivided into groups of approximately 20% (10% categories after 55 words due to the large range up to 283) including 1-15, 16-30, 31-55, 56-80, and 81+. One-way ANOVA was used to examine the mean QuAL score for each of the five groups, with Tukey's post hoc tests utilized to compare each category to all others. Differences were considered statistically significant if p < 0.05, and no adjustment was made for multiple comparisons.
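To make the correlation and group-comparison steps concrete, a rough equivalent could be sketched outside SPSS as follows. The toy DataFrame, its column names, and the single coaching word counted ("suggest") are hypothetical stand-ins for the study's actual dataset and word list.

```python
import pandas as pd
from scipy import stats

# Hypothetical assessments; the real data were anonymized Elentra exports.
df = pd.DataFrame({
    "comment": [
        "Good day overall.",
        "I suggest a broader differential; try summarizing earlier next time.",
        "Continue reading ECGs systematically; I suggest reviewing axis first.",
        "Saw patients independently.",
    ],
    "qual_score": [1, 4, 5, 2],  # QuAL totals assigned by the raters
})

# Spearman's Rho between narrative length (word count) and QuAL score.
df["word_count"] = df["comment"].str.split().str.len()
print(stats.spearmanr(df["word_count"], df["qual_score"]))

# Count occurrences of one coaching word per comment, then correlate.
df["n_suggest"] = df["comment"].str.lower().str.count("suggest")
print(stats.spearmanr(df["n_suggest"], df["qual_score"]))

# Independent samples t-test: mean QuAL score with vs. without the word.
present = df.loc[df["n_suggest"] > 0, "qual_score"]
absent = df.loc[df["n_suggest"] == 0, "qual_score"]
print(stats.ttest_ind(present, absent))
```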
Assessments were collected from 20 different residents spanning postgraduate training years 1-5. The average QuAL score for all 1997 assessments was 3.07.

Frequency of coaching word use
The number of times that a coaching word was used within each comment ranged from zero to three. provides the number of times coaching words were used once or twice in each assessment.

Correlation between total QuAL score and coaching word use
The number of times a coaching word was used within a comment was significantly and positively associated with the total QuAL score for all coaching words. The strongest correlations were for the words/phrases "continue," "try(ing)," and "next step." The negative control words/phrases "read more" and "review" were negatively correlated with the QuAL score, see .

The effect of coaching words on mean QuAL score
As shown in , the mean value of the QuAL score increased when coaching words were present; this mean difference was significant for all words except for "next time" and "read."

Word count and QuAL score
There was a significant correlation between the number of words used and the QuAL score, with a Spearman's Rho value of 0.556 ( p < 0.001). The number of words included in feedback comments ranged from 0 (these 481 assessments were excluded from the analysis) to 283 words. The word count was subdivided into groups of 20% to determine the relationship between increasing word count and QuAL score as seen in , and subdivided into 10% categories after 55 words. The one-way ANOVA and Tukey's post-hoc tests indicated that each category represented a significant increase from the previous ( p < 0.001 for all), with the exception of the last two categories, 56-80 and 81+ ( p = 0.444).
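The word-count banding and between-category comparisons behind these results could be sketched along the following lines. The data are again invented, and `pairwise_tukeyhsd` from statsmodels stands in for the SPSS post hoc procedure actually used.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical (word_count, qual_score) pairs; zero-word comments excluded.
df = pd.DataFrame({
    "word_count": [5, 12, 20, 28, 40, 52, 60, 75, 90, 120, 200, 283],
    "qual_score": [1, 1, 2, 2, 3, 3, 4, 4, 5, 4, 5, 5],
})

# Band word counts into the study's five categories; right-closed bins map
# integer counts to 1-15, 16-30, 31-55, 56-80, and 81+.
bins = [0, 15, 30, 55, 80, float("inf")]
labels = ["1-15", "16-30", "31-55", "56-80", "81+"]
df["band"] = pd.cut(df["word_count"], bins=bins, labels=labels)

# One-way ANOVA of QuAL score across the five bands.
groups = [g["qual_score"].values for _, g in df.groupby("band", observed=True)]
print(stats.f_oneway(*groups))

# Tukey's post hoc test comparing each band to all others.
print(pairwise_tukeyhsd(df["qual_score"], df["band"].astype(str)))
```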
As the first ophthalmology program in Canada to fully integrate CBME into the core of their residency training, this study offers a unique and early perspective to help inform program development. Our most compelling result is that when specific coaching words are used in narrative feedback, the QuAL score is consistently increased. This relationship was most notable for the words "next step," "try(ing)," and "continue." The phrases "next time," "read," "read more," and "review" were unsurprisingly poorly or negatively correlated with the QuAL score. These generic phrases are non-specific and less helpful for targeted learner development. We suggest that coaching language be encouraged to help guide and frame narrative comments. At our center we have recently modified the structure of some of our assessment forms to include a list of suggested prompts to encourage the use of coaching phrases in the free-text feedback boxes. Our analysis yielded a few surprising results. The phrases "next time" and "next steps" were infrequently used in our pool of narrative feedback; however, our forms use the phrases "next steps," "next time," and "continue" as prompts for the text field, and we surmise that these words were underutilized in the body of the comments due to repetition. Predictably, there was initially a clear relationship between greater mean QuAL score and increasing word count. We had anticipated a plateau in this trend much earlier than demonstrated in our analysis; a plateau was eventually seen, but not until after 80 words . It may seem discouraging that outstanding feedback quality can seemingly only be achieved with lengthy comments. However, we argue that with increased use of strategic coaching language, the length of comment can be shorter while achieving the same quality of feedback. In our analysis of early assessment data, the densest concentration of high-achieving QuAL scores (4/5 and 5/5 grades) is not at the far end of the word-count spectrum; there are numerous succinct written comments achieving high QuAL scores in our dataset. Roberts et al. found that written feedback could be both succinct (on average less than 20 words per comment) and categorized as coaching feedback with recommendations for next steps. As the culture of assessment shifts and CBME becomes engrained in PGME across Canada, we have the opportunity to focus on optimizing narrative feedback to train not just competent, but excellent physicians.

Limitations
All assessments were from a single surgical subspecialty at a single center. Some comments may have been composed by the resident receiving the feedback; one option for assessment completion on Elentra™ allows both the resident and the assessor to contribute to the form before final submission. Although we believe that the majority of comments were not generated by the resident, it is impossible to deduce who wrote what components of the narrative feedback.
Despite excellent inter-rater reliability, both graders were physicians, familiar with the clinical context of the feedback and department experts in CBME.

Using this QuAL score, we have shown that strategically used coaching words can enhance the quality of narrative feedback in assessments. Although increased word count is associated with a higher QuAL score, there is a demonstrated plateau to this relationship.
Enabling uptake and sustainability of supervision roles by women GPs in Australia: a narrative analysis of interviews | 5325377c-2ac8-44d1-b313-c51fbb752967 | 9128131 | Family Medicine[mh] | Worldwide, the proportion of women taking up careers in medicine is increasing, but they are under-represented in leadership roles. Women now outnumber men in medical school graduations in most high- and some low- and middle-income countries . In Australia, women constituted 35% of pre-2000 medical graduates, rising to 53% for post-2000 graduates . Women General Practice (GP) supervisors are important role models for teaching and guiding clinical skills development for the next generation of women GPs . Facilitating the uptake and sustainability of supervision for women GPs is further considered important for educational diversity given that women GPs see a different caseload and practice medicine in different ways compared with men . Notwithstanding the clear benefits of women GP’s participation in supervision, there are concerns that women doctors may be less attracted to full time GP roles, along with the time demands of leadership positions and jobs that have unpredictable hours, potentially leaving gaps in the supervision workforce . A major issue for maintaining a high-quality supervision workforce is promoting conditions to enable the uptake and sustainability of supervision by women GPs. In Australia, GP supervisors oversee the safety and training of one or more registrars (trainee general practitioners) whilst concurrently managing individual and practice needs within a private fee-for-service business model. Many practices supervising registrars also host medical students and/or other learners . Within the general practice context, supervision might be viewed as risky because it encompasses person and context dependencies potentially outside the control of the supervisor. Meanwhile, women GP supervisors may have other non-professional responsibilities or interests which intersect with their professional choices . The development of a supervision career may vary between different women GPs relative to the opportunities, structure, and perceived challenges of supervision career pathways. In Australia, GP training is under major reform to support increased uptake in the general practice speciality, to bolster primary prevention and early intervention services, including for an ageing population . Annually, GP practices and supervisors across the nation host over 5000 registrars at different stages of their training to grow the future general practice workforce . GP supervisors guide the development of real-world general practice skills across the 3–4-year full time GP training cycle . This can be highly rewarding for supervisors who enjoy sharing their expertise and the process of investing in the next generation, however, the specific experience of women in relation to the uptake and sustainability of supervision roles has not been explored . In Australia, as in many other countries, formal (main, lead, or principal) supervisors must be accredited, undertake professional development, and oversee administration of learning and assessment tasks in the practice. Other GPs in the practice without formal accreditation as supervisors may also contribute to registrars’ on-the-job learning, but this may not be formally recognised. Training practices receive a weekly allowance for delivering structured teaching sessions to registrars (1.5–3 hours; ≤ $420 GPT1, ≤$210 GPT2) . 
This payment is made to the practice, does not reimburse clinical supervision (the oversight of learning on the job, such as during consultations), and is generally directed to the supervisor providing the structured teaching session. This may or may not be passed on to each individual GP supervisor contributing to supervising registrars, depending on the decision of the business. Additionally, training practices receive a small weekly training practice subsidy that covers, amongst other things, lost earnings for supervisors whilst they are teaching or observing registrars and are not seeing patients during in-practice teaching (≤ $560 GPT1, ≤$280 GPT2) . Once again, this may not be passed on to supervisors. In Australia, GP registrars are employees of the practice in which they train and conduct fee-for-service patient consultations. This contributes to practice income and helps to keep the teaching costs (lost billing time for supervising) tolerable for the business, including where high quality supervision takes time . Limited available research has explored the degree to which the current systems and processes around the supervision of GP registrars in private general practices align with the needs and interests of supervisors. One study identified that rural women GP supervisors were less likely to participate in supervision than were men, but this effect diminished once adjustments were made for total doctors employed in the practice, business relationship with the practice, and total hours worked per week. This suggests that women's participation in supervision is likely to intersect with multi-level demographic and practice contextual influences across the lifespan . The broader literature indicates that there may be gender bias in medicine that leads to stereotypical responses to tasks and roles, which can impact power and economic and social prosperity . Women GPs may frame their work identity to conform to gendered expectations despite this playing out negatively for their financial and professional status . These issues may present challenges for supervision roles within general practice training. Research is needed to explore and understand the narratives of women GP supervisors, currently supervising or not, to better inform fair and inclusive environments for women GPs to become supervisors. This narrative inquiry aims to explore GP supervision in Australia from the perspective of women GPs to inform how to engage and sustain women GPs in supervision roles.
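As a toy illustration of how these capped, term-dependent payments combine for a practice, consider the sketch below. The subsidy caps are the figures quoted above; the teaching-allowance value passed in is a hypothetical placeholder (the actual cap is set per term by the program), and the function itself is our own construction, not an official formula.

```python
# Weekly training practice subsidy caps quoted above, by registrar term (AUD).
PRACTICE_SUBSIDY_CAP = {"GPT1": 560, "GPT2": 280}

def max_weekly_training_income(term: str, teaching_allowance: float) -> float:
    """Upper bound on combined weekly training payments to the practice.

    `teaching_allowance` is the separate capped payment for structured
    teaching sessions; pass whatever cap applies for the term. Both amounts
    are paid to the practice and may or may not be passed on to the
    individual supervising GPs.
    """
    return teaching_allowance + PRACTICE_SUBSIDY_CAP[term]

# Hypothetical example: a GPT1 registrar with a $420 weekly teaching allowance.
print(max_weekly_training_income("GPT1", 420.0))  # 980.0
```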
Study design
Qualitative interviews were used to explore the perspectives and lived experiences of women GPs in Australia around supervision of GP registrars .

Participants
The research team received interest from 25 women GPs in Australia, from which 17 women were purposively selected to ensure representation across a range of practice roles, personal circumstances, and supervision experiences—currently supervising, previously supervising, or had never supervised, which are characteristics known to occur within a constant dynamic .

Procedure
Ethics approval to conduct the study was granted by the Monash University Human Research Ethics Committee (# 28848) on 28th May 2021. General Practice Supervisors Australia (GPSA)—Australia's peak body advocating for GP supervisors—emailed an invitation to their membership list of around 5500 individuals, for women GPs to participate in the study. Members were requested to share the invitation with other women GPs who might be interested in participating. Potential respondents registered an expression of interest online which collected basic demographic and practice data. The interview schedule (see Additional file ) was piloted with the research team and five women GPs known to the research team, and refined to explore participant stories, including issues related to the uptake and sustainability of GP supervision roles. One-to-one semi-structured interviews were conducted online between July and September 2021. Each interviewee provided written consent to enrol in the study and was given an AUD $150 gift voucher in recognition of participation. Interviews were recorded and transcribed verbatim. The interviews were conducted by two non-GP health services researchers employed by GPSA, with the aim of exploring women's experiences to inform the development of appropriate resources and policies. The interviewers had no preconceived notion of what the women's stories might be, as there is no other research about this topic. Neither researcher had been a GP supervisor, and they did not know any of the participants. There was no specific gender lens applied to each interview due to the intersectionality of gender with other forms of inequality and exclusion . During the interviews, the interview schedule was used as a guide, with extemporaneous evolution allowing the women GP interviewees to discuss and expand their own narratives without interruption. The interviews ceased at the discretion of the interviewees, when they had nothing more that they wanted to add.

Data analysis
Narrative analysis was used to elucidate participant perspectives, allowing different stories to emerge and be arranged for their meaning to inform the topic. After each interview, reflective notes were recorded to develop initial impressions of the stories of each woman, and these were discussed by the research team to inform deeper reflection. The researchers fully immersed themselves in the data by reading and re-reading the transcripts over the course of 3 months and discussing the main stories that were emerging. The researchers sought to represent the legitimate meaning of the women's stories as part of the narrative analytical process and reduce any subjective biases . Data analysis focused on re-storying women GPs' experiences of registrar supervision through creation of story arcs, which reflected the everyday practical experiences of each participant .
Key elements from each narrative were identified, comprising characters, setting, problems, action, and resolution (see Additional file ). These were then arranged in chronological order . The interpretation of each re-storied narrative was repeatedly discussed amongst the research team. The temporal unity and complexity of the data were protected to relate the lived experience of women GP supervisors and reflect on the capacity for women GPs to take up and sustain registrar supervision. Where counter-narratives emerged, these were also documented to provide for richer interpretation.
Of the 25 women who completed expressions of interest for the study, 17 participated in interviews, representing almost 17 hours of recorded data. The sociodemographic characteristics of the sample are presented in Table . In summary, a range of characteristics were represented, although most respondents were aged under 45 years, were partnered, had children or were expecting, and were currently supervising. There was a similar level of representation across those working part and full time and working in a range of practice sizes. Six intersecting story arcs emerged from the qualitative interview data, which were about power and control, pay, time, other life commitments, quality of supervision, and supervisor identity. These are summarised in Table .

Narrative analysis
A description of the narrative for each story arc is presented with exemplars below.

Power and control
Several women GPs working as non-practice owners described having been asked to take on supervision without being fully informed, and at times being misinformed, about the role. This pattern had the potential to repeat as women GPs moved between practices. I fell into it, so it actually happened when I was working in [regional centre], and the practice manager just handed a form and said, "Can you sign this, because we need an extra person to supervise?" And I think I did say something like, "So long as you're not expecting me to actually do anything," she said, "No, we're not," but of course they were. So that's how I sort of fell into it, and then…, when I changed jobs, one of the things they said [in the new practice] is they need a supervisor, which was fine. I didn't realize that they actually needed a primary supervisor, I didn't realize that that's where that was heading. So, that's how I got into GP supervision. [ID1] One woman GP, working in the same practice for over 20 years as a non-practice owner, related a similar story of being nominated as an official supervisor on formal paperwork without her consent: I actually didn't put down my name to actually be an official … supervisor. Although I was teaching, I was doing it in an unofficial manner. But my boss [practice owner] took it on himself with his wife to forge my signature to say that I was going to be prepared to be doing this teaching. One day, three registrars arrived. [ID8] She found this frustrating because her efforts to accommodate the situation were not acknowledged: …I had my nose a little bit out of joint, because [practice owner/boss] didn't pay me anything, I didn't get any thanks. It was just sort of assumed, okay, well now you've agreed to it, goodbye. [ID8] When asked to supervise again later at the same practice, she sought more control over the process including asking for payment - "I'm going to actually speak to [practice owner/boss]. And so I asked him to pay me…" [ID8] - but this resulted in her being excluded from ongoing supervision opportunities: …when the next one came without telling me I suddenly did not become the supervisor and I haven't been the supervisor since. [ID8] She subsequently only supervised informally, disjointed from the formal supervision team but contributing in a way that she had control over. Women GP supervisors related doing a substantial proportion of informal supervision without recognition or authority: I would be sitting in a room with a junior registrar, the GPT1 in the next room on their first two weeks, and on the weekends, because I worked a lot of Saturdays. And there'd be no recognition.
There wasn't even a thank-you for doing it. And that didn't make me want to give it up, but it made me really [upset]. [ID2] Several women GPs also told of having a lack of power in relation to overseeing male registrars: I wondered sometimes if he just culturally struggled with having female supervisors. I often felt like he didn't listen or take things in as much from me as he might've from my male colleague. [ID5] Another woman GP supervisor sought help when she felt that a male registrar was not receptive to her feedback and advice, but the situation remained unresolved due to a perceived lack of empathy from male superiors: I remember telling … [one of my] superiors about it... I was trying to explain to him the trouble I was having with this registrar, and how he wasn't receiving feedback. But this other, this contact person, was also a male GP, and I don't think he really fully understood that. He kind of just brushed it off. [ID11] The counter-narrative about lack of control and power for women GP supervisors was seen to develop where women GP supervisors who were non-practice owners led all aspects of supervision for their practices and were acknowledged for doing so. These women had strong agency and enjoyed building a supervision model: I just did it all... everything, including all the admin of it and getting all the registrars and all of the supervising of the registrar. So it was the only way I could get it running…if I just did it… my purpose with that was about making [supervision of registrars] sustainable. [ID9] However, there was no practice support to sustain the teaching and learning model this GP had planned: they just didn't want to be involved. [ID9]

Pay
Some women GPs who were working as non-practice owners had not questioned why they had never been remunerated for supervising. When other women GP colleagues alerted them to the availability of funding for teaching, they approached male practice owners to talk about getting paid for the teaching aspect of supervision work: I didn't even know that practices or supervisors were being remunerated… one other colleague [senior female GP] said, "Oh, don't you know that they do get paid?" I was like, "Oh, no idea," they're like, "You should ask." [ID6] Among others, one mid-career woman GP, working as a main supervisor and a non-practice owner, approached the practice manager about not getting paid: I just wasn't getting any answer. And I was going on more and more teaching… I phoned up the owner, just to try to talk to him directly, because I wasn't getting any sense from the manager, and the owner said, "I've got to make an income, those payments are mine. You should do the training in your own time." [ID4] She was unsuccessful in the negotiations and ended up leaving the practice feeling unappreciated: … one of my colleagues said, "… I think this is time for us to leave." And so, both of us, we decided to move to a practice that appreciated teaching and that appreciated we weren't just magically becoming GPs and there was a point of teaching.
[ID4] Other mid-career women GP supervisors, working as non-practice owners, added to the story that, over their careers, they had become more willing to advocate for remuneration of their teaching in supervision work or relinquish it: I think when all you do is work in general practice and you're not remunerated for your contributions for teaching, but your income drops because you're providing supervision and teaching… I wouldn't accept that now, whereas in the past I just accepted that as the way it was…now I'm old enough and cranky enough, I just say, "No, sorry, can't help you." [ID10] Countering this narrative, some women GPs who led teaching and learning in their practice as practice owners or educational leaders (working on a contract) recounted that they ensured supervision payments were distributed to the GPs who were teaching: ...it has been the practice principals who have been doing the teaching most of the time, but if we have had someone, one of the other contractors [non-practice owner] doctors doing a teaching session for us, we have always given them the equivalent share of the teaching payment. We feel like that's only fair. [ID5] However, even for those receiving payment for supervising, the payments were considered small relative to earnings that women GPs could make from billings: ...when I spent the morning sitting in with my registrar yesterday, I sort of got my normal supervision payments but essentially that is nothing compared to if I was seeing my own patients, because the registrar's done all the billing for that… if it wasn't a financial disincentive, that would be nice. [ID17] One early-career GP noted pay was a necessity given the time she spent on supervision: I think the income support actually was a big factor…I perhaps was a bit naive about how much time it would take, especially because I want to do it properly. [ID13] Pay was also an issue for a mid-career woman GP, particularly in relation to choosing work which did not disadvantage her for working part-time and having had more breaks in practice: I did want to get paid for what I was doing. I think that the finances do come into it; I think it's got to be adequately remunerated… women tend to work less hours and have time off, so that might be more of an issue than for a man possibly. [ID8] Women GPs who were single and/or co-parenting also reflected that the pay is important as part of a portfolio of income that they relied upon to live; and if the pay for supervision was inadequate it added to the reasons to cease supervision: [leaving supervision] …it was just a combination of … income pressures, and also a registrar that's just not appreciative …. I just decided, "No, I can't do it. I just have to look after myself." [ID4]

Time
Many women GPs identified with being approachable and available as supervisors, but noted that this increased the time they dedicated to supporting registrar learning and well-being, whether or not they were the main supervisor: They know that they can call me at any time. I have no hesitation, I'd rather them call me than not. [ID3] One woman GP noticed that a group of her early-career women colleagues were frequently used for informal support because the main male supervisor was not considered approachable: …the [official] supervisor was male, and then there were at least three of us that were female …the interesting comment was that "Oh yeah, you guys are more approachable, you guys are more accessible.
It's easier for me to come and ask you questions and ask for some advice as opposed to going to my supervisor." [ID6] They tried to resolve some of the boundary issues with the registrar and ensure the main supervisor was accountable to his role. However, the situation was not resolved, and this eroded the women GPs' time with their own patients whilst they supported the needs of the registrar. This caused frustration: ...it's typically not part of our job description … We don't mind, helping out here on the odd occasion, but if suddenly someone is calling you 5, 6, 7 times in a session … It does take out time from your own practice. [ID6] Women GP supervisors also discussed being frequently called upon for advice on issues for which they were perceived as experts, and these could be the more time-consuming cases: …They don't have 6 problems, they have like 20, and there are like 35 tablets. Apparently, I'm the juggler of that. [ID16] Women GPs described fitting supervision around a broader scope of medicine they practised: … our consults can be more complex and time-consuming compared to some of my male colleagues... It can be hard when you're talking to someone about a mental health issue…then you're kind of interrupted to talk to a registrar to do teaching. It can get quite awkward. [ID15] The time that women GPs dedicated to supervision increased if registrars were junior, unsafe, and/or under-performing: ...we haven't had a [first or second 6-month term registrar]… for a while and they're always the ones that are very time-consuming … they ring you a lot, then that's exhausting. [ID5] This scenario led some women GP supervisors to consider giving up supervision work: We've certainly had in the last couple of years, a few... a couple of tricky registrars, and this practice has been taking registrars for probably 25 years. And the staff are exhausted [ID5] When women took a break from supervising, they felt relief: It gives the time to do other stuff... I'll sit down and do some other things as well, rather than having to worry about where the learning needs are.... more time to just do my own learning. [ID5]

Other life commitments
Women GPs reflected on both unpredictable or fixed commitments in their personal lives and how supervising over a fixed learning term would align with this: Outside my personal things and my work… knowing that if you're going to commit for an official thing [supervising], you're committing for six months. And so, again, I would hate to take on a role and then say, after three months, "oh, sorry, I can't do that." [ID8] Other women GPs yet to start families anticipated major breaks in practice when doing this, which could disrupt supervision momentum: For me obviously, now I'm going to have a kid. Then go out, search what I need to... Work out what I need to do to fill out the paperwork, go to the training days. Those kinds of things I think will make it harder for me, definitely… [ID7] There was a sense of unpredictability related to raising young children which could make it harder to supervise: ...that is a significant thing to think about, having children. And it is something that I don't think male …GPs and male supervisors have to really think about as much…the hard part is the unpredictability of it. [ID11] Primary school-aged children were noted to impose scheduled demands which were difficult for women GPs to manage around supervision tasks, including completing paperwork: I've been more conscious since having a family... You've got more deadlines.
You've got to finish on time, to get home for children. Previously, it didn't matter if I worked late or stayed late to do paperwork. But it matters now. [ID12] However, the capacity to juggle children around supervision roles improved if women were in a supportive and flexible practice and the childcare supports were nearby: ...the practice has always been really family-friendly, …really good at juggling. My babies always came to tutes with me and stuff like that, it was never an issue for clinical meetings. [ID17] However, this supervisor still reported that supervision took an additional effort: even if we have someone at home, juggling kind of the family life and that sort of thing, and needing to be home for kids or go to other things, sometimes it's just an extra thing to do. [ID17]

Quality of supervision
Women GPs related stories which showed they were intrinsically motivated to provide quality teaching and learning to create a positive experience for registrars. Correspondingly, they were reticent to get involved in supervision unless they could meet their personal standards, overlapping with the theme about their other life commitments: I don't know that I would want to go into being a supervisor if I can't do it properly. If can't do it properly, can't do it well, then what's the point? [ID7] Women GPs related a belief that best practice meant going above and beyond the minimum requirements by developing and nurturing the supervisor-registrar alliance: I probably do a bit more than what is prescribed, I guess, [or] expected. And a part of it is because I think it's really like I value registrars and I think that they should get a really good experience, and part of it is for me as well: because I want to make sure that I'm comfortable. [ID3] Women GPs aimed to build skills and find the right practice culture to foster their capacity to supervise to a high standard: ...if I was in the right environment, yeah, I would consider being a supervisor…[and] do the training. [ID7] However, women GPs' stories indicated a lack of specific educational support and guidance to enhance their understanding and benchmarking of the supervision role, which led these women to search for information themselves: I don't remember doing any specific education on how to be a supervisor. I had to figure out a lot of stuff myself or by asking other people. [ID7] This imposed a level of responsibility for women GPs commencing supervision such that others may not know about the process, or they don't know what's involved, and they might feel this sense of burden or responsibility because they don't know to what level they'll have to have to start supervision. [ID15] Women GPs actively pursued ways to develop their supervision skills by reflecting on their own experience as a supervisor and through teaching medical students, or benchmarking from being involved in external clinical teaching visits (ECTVs): I feel like I've built up more skills over time, with teaching and training and learning and feedback. [ID12] I want to be involved…to do ECTVs, to make sure that I... build my skills in supporting registrars. [ID13] Women GPs identified the need for backup support for registrars, showing preference for working as part of a team of supervisors: … it's nice to have other colleagues there... if I'm not there, there's two other people…[so] I know that the registrar's not alone…. It's also nice to have another supervisor or two to discuss feedback and your thoughts… just to bounce things off others.
[ID3] She also sought wider opportunities to exchange ideas with GP supervisors from other practices.

Supervisor identity
Women GPs related stories of having imposter syndrome when they started supervising. This was underpinned by a lack of confidence about their technical knowledge of being a GP: I think a lot of female GPs worry that they don't know enough to supervise or to teach… they think they do have to teach that really technical knowledge… the old imposter syndrome. [ID17] This concern was generally reflected by women GPs who were early in their career: …you have that imposter syndrome. I wasn't sure, I was only three years post-fellowship and I thought, "What am I going to teach these guys?" [ID3] One early-career woman GP noted that this was heightened at the thought of being a main supervisor: ...we've only had one registrar since I've been officially a supervisor… I was a little bit terrified anyway, to be their main supervisor… [ID13] This woman GP supervisor worked in an education-focused practice with very experienced senior supervisors, which made her question her supervision identity: I don't know if it was just the caseload, or just me learning how to be a supervisor, as the challenge. [ID13] She felt the feedback about her genuine value to the supervision team played a part in legitimising her contribution, …they think I've got things to offer with the topics or experiences and style that I have… they think I've got something to give. And it also helped that she was supported to learn the supervision role by other senior supervisors rather than coming in as a supervisor expert: I was quite intimidated initially and I'm like, "I have no idea what I'm doing." I still feel that, but …knowing that I'm going to be taught how to teach has helped. [ID13] Some women GPs wanted to begin supervising early post-fellowship because, at this time, they had fresh knowledge of the training and assessment processes: …as a new fellow, you kind of know the system, you know the exam process, you know you're pretty up to date with all that kind of thing. Probably if it was easy enough to have done it, I would've done it probably straight away… I think it's a bit harder now... Exams have changed… I am no longer up to date... I don't have those resources anymore... [ID7] Mid- and later-career GPs also described personal experiences of imposter syndrome. This was based around lacking confidence about GP exams and current clinical standards. Women GPs overcame imposter syndrome about this perceived knowledge gap by reflecting on their real-world knowledge and their capacity to role-model work-life balance: ...there's no point me teaching them about heart disease or how to read an ECG because his being a cardiology registrar, he's better at that than I am anyway. But I can teach more about the art [of being a GP]… I try to do role modelling… balancing kids and work. [ID17] Mid- and later-career women overcame imposter syndrome by having a couple of people who are dual supervisors and can bounce stuff off each other … just helping with that confidence [ID17]; and using self-reflection to build comfort with not having all the answers: I've got enough self-checks on myself to not doubt myself too much…. I'm not pretending to be anything more than what I am… I'm quite happy as a supervisor to say, "I don't know either." [ID16]
Women GPs also reflected on their unique style of medicine, types of patients, and the style of teaching they experienced in their own training for building identity as a supervisor: … there's definitely male general practice medicine which is quite different to the things females see for the most part. Style of teaching I think varied a bit too, for me anyway, in terms of female and male. [ID7] Women GPs' stories reflected that they became increasingly assertive about their value as they matured in their career: It's only now that I'm quite old that I can actually behave more like a wicked witch and stand up for myself… [ID14] There was a perception that the next generation of women GPs would likely be more assertive: ... I think [the next generation of women] … have less barriers because they're all more assertive…than maybe my generation was, or maybe I am. [ID2]
A description of the narrative for each story arc is presented with exemplars below.
Several women GPs working as non-practice owners described having been asked to take on supervision without being fully informed, and at times being misinformed, about the role. This pattern had the potential to repeat as women GPs moved between practices. I fell into it, so it actually happened when I was working in [regional centre], and the practice manager just handed a form and said, "Can you sign this, because we need an extra person to supervise?" And I think I did say something like, "So long as you're not expecting me to actually do anything," she said, "No, we're not," but of course they were. So that's how I sort of fell into it, and then…, when I changed jobs, one of the things they said [in the new practice] is they need a supervisor, which was fine. I didn't realize that they actually needed a primary supervisor, I didn't realize that that's where that was heading. So, that's how I got into GP supervision. [ID1] One woman GP, working in the same practice for over 20 years as a non-practice owner, related a similar story of being nominated as an official supervisor on formal paperwork without her consent: I actually didn’t put down my name to actually be an official … supervisor. Although I was teaching, I was doing it in an unofficial manner. But my boss [practice owner] took it on himself with his wife to forge my signature to say that I was going to be prepared to be doing this teaching. One day, three registrars arrived. [ID8] She found this frustrating because her efforts to accommodate the situation were not acknowledged: …I had my nose a little bit out of joint, because [practice owner/boss] didn't pay me anything, I didn't get any thanks. It was just sort of assumed, okay, well now you've agreed to it, goodbye. [ID8] When asked to supervise again later at the same practice, she sought more control over the process including asking for payment - “I’m going to actually speak to [practice owner/boss]. And so I asked him to pay me…” [ID8] - but this resulted in her being excluded from ongoing supervision opportunities: …when the next one came without telling me I suddenly did not become the supervisor and I haven't been the supervisor since. [ID8] She subsequently only supervised informally, disjointed from the formal supervision team but contributing in a way that she had control over. Women GP supervisors related doing a substantial proportion of informal supervision without recognition or authority: I would be sitting in a room with a junior registrar, the GPT1 in the next room on their first two weeks, and on the weekends, because I worked a lot of Saturdays. And there'd be no recognition. There wasn't even a thank-you for doing it. And that didn't make me want to give it up, but it made me really [upset]. [ID2] Several women GPs also told of having a lack of power in relation to overseeing male registrars: I wondered sometimes if he just culturally struggled with having female supervisors. I often felt like he didn't listen or take things in as much from me as he might've from my male colleague. [ID5] Another woman GP supervisor sought help when she felt that a male registrar was not receptive to her feedback and advice, but the situation remained unresolved due to a perceived lack of empathy from male superiors: I remember telling … [one of my] superiors about it... I was trying to explain to him the trouble I was having with this registrar, and how he wasn't receiving feedback. 
But this other, this contact person, was also a male GP, and I don't think he really fully understood that. He kind of just brushed it off. [ID11] A counter-narrative to this lack of control and power emerged where women GP supervisors who were non-practice owners led all aspects of supervision for their practices and were acknowledged for doing so. These women had strong agency and enjoyed building a supervision model: I just did it all... everything, including all the admin of it and getting all the registrars and all of the supervising of the registrar. So it was the only way I could get it running…if I just did it… my purpose with that was about making [supervision of registrars] sustainable. [ID9] However, there was no practice support to sustain the teaching and learning model this GP had planned: they just didn't want to be involved. [ID9]
Some women GPs who were working as non-practice owners had not questioned why they had never been remunerated for supervising. When other women GP colleagues alerted them to the availability of funding for teaching, they approached male practice owners to talk about getting paid for the teaching aspect of supervision work: I didn't even know that practices or supervisors were being remunerated… one other colleague [senior female GP] said, "Oh, don't you know that they do get paid?" I was like, "Oh, no idea," they're like, "You should ask." [ID6] Among others, one mid-career woman GP, working as a main supervisor and a non-practice owner, approached the practice manager about not getting paid: I just wasn't getting any answer. And I was going on more and more teaching… I phoned up the owner, just to try to talk to him directly, because I wasn't getting any sense from the manager, and the owner said, "I've got to make an income, those payments are mine. You should do the training in your own time." [ID4] She was unsuccessful in the negotiations and ended up leaving the practice feeling unappreciated: … one of my colleagues said, "… I think this is time for us to leave." And so, both of us, we decided to move to a practice that appreciated teaching and that appreciated we weren't just magically becoming GPs and there was a point of teaching. [ID4] Other mid-career women GP supervisors, working as non-practice owners, added to the story that, over their careers, they had become more willing to advocate for remuneration of their teaching in supervision work or relinquish it: I think when all you do is work in general practice and you're not remunerated for your contributions for teaching, but your income drops because you're providing supervision and teaching… I wouldn't accept that now, whereas in the past I just accepted that as the way it was…now I'm old enough and cranky enough, I just say, "No, sorry, can't help you." [ID10] Countering this narrative, some women GPs who led teaching and learning in their practice as practice owners or educational leaders (working on a contract) recounted that they ensured supervision payments were distributed to the GPs who were teaching: ...it has been the practice principals who have been doing the teaching most of the time, but if we have had someone, one of the other contractors [non-practice owner] doctors doing a teaching session for us, we have always given them the equivalent share of the teaching payment. We feel like that's only fair. [ID5] However, even for those receiving payment for supervising, the payments were considered small relative to the earnings that women GPs could make from billings: ...when I spent the morning sitting in with my registrar yesterday, I sort of got my normal supervision payments but essentially that is nothing compared to if I was seeing my own patients, because the registrar's done all the billing for that… if it wasn't a financial disincentive, that would be nice. [ID17] One early-career GP noted pay was a necessity given the time she spent on supervision: I think the income support actually was a big factor…I perhaps was a bit naive about how much time it would take, especially because I want to do it properly. [ID13] Pay was also an issue for a mid-career woman GP, particularly in relation to choosing work which did not disadvantage her for working part-time and having had more breaks in practice: I did want to get paid for what I was doing.
I think that the finances do come into it; I think it's got to be adequately remunerated… women tend to work less hours and have time off, so that might be more of an issue than for a man possibly. [ID8] Women GPs who were single and/or co-parenting also reflected that pay was important as part of a portfolio of income they relied upon to live, and that inadequate pay for supervision added to the reasons to cease supervising: [leaving supervision] …it was just a combination of … income pressures, and also a registrar that's just not appreciative …. I just decided, "No, I can't do it. I just have to look after myself." [ID4]
Many women GPs identified with being approachable and available as supervisors, but noted that this increased the time they dedicated to supporting registrar learning and well-being, whether or not they were the main supervisor: They know that they can call me at any time. I have no hesitation, I'd rather them call me than not. [ID3] One woman GP noticed that a group of her early-career women colleagues were frequently used for informal support because the main male supervisor was not considered approachable: …the [official] supervisor was male, and then there were at least three of us that were female …the interesting comment was that "Oh yeah, you guys are more approachable, you guys are more accessible. It's easier for me to come and ask you questions and ask for some advice as opposed to going to my supervisor." [ID6] They tried to resolve some of the boundary issues with the registrar and ensure the main supervisor was accountable to his role. However, the situation was not resolved, and this eroded the women GPs' time with their own patients whilst they supported the needs of the registrar. This caused frustration: ...it's typically not part of our job description … We don't mind, helping out here on the odd occasion, but if suddenly someone is calling you 5, 6, 7 times in a session … It does take out time from your own practice. [ID6] Women GP supervisors also discussed being frequently called upon for advice on issues for which they were perceived as experts, and these could be the more time-consuming cases: …They don't have 6 problems, they have like 20, and there are like 35 tablets. Apparently, I'm the juggler of that. [ID16] Women GPs described fitting supervision around a broader scope of medicine they practised: … our consults can be more complex and time-consuming compared to some of my male colleagues... It can be hard when you're talking to someone about a mental health issue…then you're kind of interrupted to talk to a registrar to do teaching. It can get quite awkward. [ID15] The time that women GPs dedicated to supervision increased if registrars were junior, unsafe, and/or under-performing: ...we haven't had a [first or second 6-month term registrar]… for a while and they're always the ones that are very time-consuming … they ring you a lot, then that's exhausting. [ID5] This scenario led some women GP supervisors to consider giving up supervision work: We've certainly had in the last couple of years, a few... a couple of tricky registrars, and this practice has been taking registrars for probably 25 years. And the staff are exhausted. [ID5] When women took a break from supervising, they felt relief: It gives the time to do other stuff... I'll sit down and do some other things as well, rather than having to worry about where the learning needs are.... more time to just do my own learning. [ID5]
Women GPs reflected on both unpredictable and fixed commitments in their personal lives and how supervising over a fixed learning term would align with these: Outside my personal things and my work… knowing that if you're going to commit for an official thing [supervising], you're committing for six months. And so, again, I would hate to take on a role and then say, after three months, "oh, sorry, I can't do that." [ID8] Other women GPs yet to start families anticipated major breaks in practice, which could disrupt supervision momentum: For me obviously, now I'm going to have a kid. Then go out, search what I need to... Work out what I need to do to fill out the paperwork, go to the training days. Those kinds of things I think will make it harder for me, definitely… [ID7] There was a sense of unpredictability related to raising young children which could make it harder to supervise: ...that is a significant thing to think about, having children. And it is something that I don't think male …GPs and male supervisors have to really think about as much…the hard part is the unpredictability of it. [ID11] Primary school-aged children were noted to impose scheduled demands which were difficult for women GPs to manage around supervision tasks, including completing paperwork: I've been more conscious since having a family... You've got more deadlines. You've got to finish on time, to get home for children. Previously, it didn't matter if I worked late or stayed late to do paperwork. But it matters now. [ID12] However, the capacity to juggle children around supervision roles improved if women were in a supportive and flexible practice and childcare supports were nearby: ...the practice has always been really family-friendly, …really good at juggling. My babies always came to tutes with me and stuff like that, it was never an issue for clinical meetings. [ID17] However, this supervisor still reported that supervision took additional effort: even if we have someone at home, juggling kind of the family life and that sort of thing, and needing to be home for kids or go to other things, sometimes it's just an extra thing to do. [ID17]
Women GPs related stories which showed they were intrinsically motivated to provide quality teaching and learning to create a positive experience for registrars. Correspondingly, they were reluctant to get involved in supervision unless they could meet their personal standards, which overlapped with the theme about their other life commitments: I don't know that I would want to go into being a supervisor if I can't do it properly. If can't do it properly, can't do it well, then what's the point? [ID7] Women GPs related a belief that best practice meant going above and beyond the minimum requirements by developing and nurturing the supervisor-registrar alliance: I probably do a bit more than what is prescribed, I guess, [or] expected. And a part of it is because I think it's really like I value registrars and I think that they should get a really good experience, and part of it is for me as well: because I want to make sure that I'm comfortable. [ID3] Women GPs aimed to build skills and find the right practice culture to foster their capacity to supervise to a high standard: ...if I was in the right environment, yeah, I would consider being a supervisor…[and] do the training. [ID7] However, women GPs' stories indicated a lack of specific educational support and guidance to enhance their understanding and benchmarking of the supervision role, which led these women to search for information themselves: I don't remember doing any specific education on how to be a supervisor. I had to figure out a lot of stuff myself or by asking other people. [ID7] This imposed a level of responsibility on women GPs commencing supervision, such that others may not know about the process, or they don't know what's involved, and they might feel this sense of burden or responsibility because they don't know to what level they'll have to start supervision. [ID15]. Women GPs actively pursued ways to develop their supervision skills by reflecting on their own experience as a supervisor and through teaching medical students, or benchmarking from being involved in external clinical teaching visits (ECTVs): I feel like I've built up more skills over time, with teaching and training and learning and feedback. [ID12] I want to be involved…to do ECTVs, to make sure that I... build my skills in supporting registrars. [ID13] Women GPs identified the need for backup support for registrars, showing a preference for working as part of a team of supervisors: … it's nice to have other colleagues there... if I'm not there, there's two other people…[so] I know that the registrar's not alone…. It's also nice to have another supervisor or two to discuss feedback and your thoughts… just to bounce things off others. [ID3] She also sought wider opportunities to exchange ideas with GP supervisors from other practices.
Women GPs related stories of having imposter syndrome when they started supervising. This was underpinned by a lack of confidence about their technical knowledge of being a GP: I think a lot of female GPs worry that they don't know enough to supervise or to teach… they think they do have to teach that really technical knowledge… the old imposter syndrome. [ID17] This concern was generally reflected by women GPs who were early in their career: …you have that imposter syndrome. I wasn't sure, I was only three years post-fellowship and I thought, "What am I going to teach these guys?" [ID3] One early-career woman GP noted that this was heightened at the thought of being a main supervisor: ...we've only had one registrar since I've been officially a supervisor… I was a little bit terrified anyway, to be their main supervisor… [ID13] This woman GP supervisor worked in an education-focused practice with very experienced senior supervisors, which made her question her supervision identity: I don't know if it was just the caseload, or just me learning how to be a supervisor, as the challenge. [ID13] She felt the feedback about her genuine value to the supervision team played a part in legitimising her contribution: …they think I've got things to offer with the topics or experiences and style that I have… they think I've got something to give. It also helped that she was supported to learn the supervision role by other senior supervisors rather than coming in as a supervisor expert: I was quite intimidated initially and I'm like, "I have no idea what I'm doing." I still feel that, but …knowing that I'm going to be taught how to teach has helped. [ID13] Some women GPs wanted to begin supervising early post-fellowship because, at this time, they had fresh knowledge of the training and assessment processes: …as a new fellow, you kind of know the system, you know the exam process, you know you're pretty up to date with all that kind of thing. Probably if it was easy enough to have done it, I would've done it probably straight away… I think it's a bit harder now... Exams have changed… I am no longer up to date... I don't have those resources anymore... [ID7] Mid- and later-career GPs also described personal experiences of imposter syndrome. This was based around lacking confidence about GP exams and current clinical standards. Women GPs overcame imposter syndrome about this perceived knowledge gap by reflecting on their real-world knowledge and their capacity to role-model work-life balance: ...there's no point me teaching them about heart disease or how to read an ECG because his being a cardiology registrar, he's better at that than I am anyway. But I can teach more about the art [of being a GP]… I try to do role modelling… balancing kids and work. [ID17] Mid- and later-career women overcame imposter syndrome by having a couple of people who are dual supervisors and can bounce stuff off each other … just helping with that confidence [ID17]; and by using self-reflection to build comfort with not having all the answers: I've got enough self-checks on myself to not doubt myself too much…. I'm not pretending to be anything more than what I am… I'm quite happy as a supervisor to say, "I don't know either." [ID16].
Women GPs also reflected on their unique style of medicine, types of patients, and the style of teaching they experienced in their own training for building identity as a supervisor: … there's definitely male general practice medicine which is quite different to the things females see for the most part. Style of teaching I think varied a bit too, for me anyway, in terms of female and male. [ID7] Women GPs' stories reflected that they became increasingly assertive about their value as they matured in their career: It's only now that I'm quite old that I can actually behave more like a wicked witch and stand up for myself. [ID14] There was a perception that the next generation of women GPs would likely be more assertive: ... I think [the next generation of women] … have less barriers because they're all more assertive…than maybe my generation was, or maybe I am. [ID2]
The aim of this narrative inquiry was to explore the perspectives of women GP supervisors in Australia to facilitate diversity and capacity in the GP supervision workforce. Our findings suggest that there are intersecting experiences which could underpin women GPs' willingness to take up and sustain supervision roles. Women GPs in general practices who are non-practice owners may lack agency in business decisions related to supervision, such as formal recognition for supervision and payment of teaching allowances. Similarly, stories from early-career women GPs speak to a lack of recognition and remuneration for teaching, which can deter them from taking on supervision in the future; mid- and later-career women GPs shared these experiences and opted to cease supervision unless they were given recognition and remuneration for paid components of the role. These findings are consistent with the extant literature about inequality regimes arising from complex, systemic, and interlinked inequalities in workplace practices and processes. We found that women GPs commonly contributed to teaching and supervision work in informal ways that benefited practices. However, this labour was often hidden and without pay or recognition, despite the availability of remuneration for teaching and women GP supervisors placing a premium on the quality of their supervision, as well as their availability and approachability as supervisors. The investment that women GPs make in supervision, albeit sometimes without recognition or reward, is likely motivated by interests to protect the safety of learners in the practice and to foster trainee doctors interested in working as GPs and in the practice in the future. However, it also suggests a gendered substructure to practice supervision, where inequalities are built into women's roles and women supervise around a set of explicit and implicit rules related to gender and their position within the practice. An important issue of gender equity within contemporary medicine is ensuring that female GPs, particularly those in early career, have access to information about supervision roles within different practices, payments for supervision relative to the different roles, the process to get involved, and access to a support network such as a peak body. It may also benefit women GPs if supervision policies require that practices consult with women GPs about their inclusion in supervision teams, document the roles of supervision teams, and provide a clear statement about supervision activities that are eligible for remuneration. Women GPs in this study also reported being asked to support the teaching of sensitive topics like women's health, mental health, and sexual health. While this provides diversity in practice teaching, it can also create tensions for women GPs trying to navigate caring for their own patients and managing registrar interruptions. Despite their value as supervisors, women GPs expressed a lack of confidence in supervising and sought validation through the contributions of technical or real-world medicine that they brought to the role. It is possible that this belief relates to women fitting into the socio-relational context of medicine as an historically male profession and assuming a gendered role which reinforces inequality.
The respondents generally believed that the next generation of women GPs would break through this stereotype and be more assertive; however, this will largely depend on better acknowledgement of the valued contribution women GP supervisors might make at all stages of their career. Confidence could also be supported through more women-specific mentorship networks for supervision. Such professional networks could be useful at practice, regional, state, and national levels, and might assist women GPs to develop their supervision skills and confidence and improve awareness of and access to useful resources. This aligns with other research suggesting the need for effective and sensitive professional development and accommodation of different work patterns as the GP workforce is feminised. A major consideration for policies seeking to enhance women GPs' uptake and sustainability of supervision is enabling women to enter and re-enter supervision roles across their career span. Australia remains relatively wedded to the normative, unencumbered, male worker archetype which relegates women to juggling paid and unpaid work. However, many GPs also choose not to focus exclusively on work, which aligns with research suggesting that generalist doctors are motivated by social and family values as part of their career choices. As a result, supervision roles in general practice need to accommodate both professional and (unpredictable and fixed) non-professional roles that women, and men, adopt across their life course. A key option to manage this is to promote team-based supervision where women GPs have a clear role and can make a quality contribution through a shared commitment. This may allow women to develop and retain their identity as a supervisor for longer, and to leave and re-enter supervision if they take breaks from work or work part time. This research was exploratory and was limited to Australia, but it is the first to explore the lived experience of women GP supervisors. The study was well-subscribed by women at various stages of their GP supervision careers, providing a rich opportunity to reflect on different narratives. We acknowledge, however, the possibility that more interviews could have exposed a wider range of stories. Australia has a unique GP training system, and women GPs may have different experiences in other countries. This research was designed to explore the stories of Australian women GPs and, although it did not take a gendered lens, the findings do relate to feminist theory. It will be important to expand on this research to explore male GP supervisor experiences to confirm whether a gendered interpretation is valid.
This research expands understanding of the lived experience of Australian women GP supervisors as they navigate taking up and managing supervision roles. The research points to story arcs which were about power and control, pay, time, other life commitments, quality of supervision, and supervisor identity. These represent significant issues that intersect to potentially impact the interest and capacity for women to join and be retained in the GP supervision workforce. The findings can be applied to developing more specific resources, supports, and structures to enable women to participate in and sustain GP supervision at the level that they find acceptable and rewarding.
Additional file 1. Semi-structured interview guide. Tabulated semi-structured interview questions and prompts.
Additional file 2. Story arc framework. Tabulated story arc framework and description.
Redefining Diabetes Care: Evaluating the Impact of a Carbohydrate-Reduction, Health Coach Approach Model in New Zealand
Type 2 diabetes (T2D) represents a substantial and growing global health challenge, straining healthcare systems and frequently leading to a range of complications. While traditionally prediabetes (PD) and T2D have been regarded as progressive, chronic ailments necessitating lifelong management, emerging research indicates that these conditions can be reversed or put into remission through dietary and lifestyle changes. Although there are several dietary approaches that can be used to manage PD and T2D, reducing overall carbohydrate intake shows the best evidence for improving glycemia, even independently of weight loss. Although reduced carbohydrate diets have been incorporated into national diabetes dietary guidelines worldwide for several years, this has not yet formally taken place in New Zealand (NZ). However, the uptake and practice of this approach are rapidly expanding in primary care in NZ, as demonstrated by our ongoing work in this area. This paper addresses the preventable and potentially reversible nature of PD and T2D by examining a primary care strategy focused on whole food and carbohydrate reduction for diabetes management and reversal. The model is supported by holistic healthcare delivery, based on the health coach approach. The significance of addressing PD and T2D lies not only in their high prevalence but also in the significant socioeconomic burdens they impose. Managing chronic diseases places a substantial burden on healthcare systems and individuals. In NZ, as in many other parts of the world, the escalating diabetes epidemic calls for innovative and sustainable solutions, particularly in light of concerning figures around general practitioner (GP) burnout, shortages, and intention to retire. Moving away from a system that is heavily reliant on GPs and instead utilising other allied healthcare professionals may, therefore, have multiple advantages, particularly given the constraints of the usual 15-min GP appointment. Health coaching is a relatively new concept in NZ and, until recently, has not been widely embedded in the primary healthcare system. However, substantial development in growing this workforce has meant an increasing number of GPs are now able to refer patients to health coaches and wellbeing advisors, called Health Improvement Practitioners. These healthcare providers are now employed either within a GP clinic or in a Primary Health Organisation (a cluster of clinics which work together to care for patients who are registered with them). While the importance of educating individuals to manage their diabetes has been recognised since the 1930s, current healthcare models often fall short in equipping patients with the skills and motivation essential for effective and sustained diabetes management through lifestyle interventions. While tight metabolic control can delay or prevent diabetes complications, the motivation and ability of patients to take on this responsibility vary greatly. Consequently, significant segments of the diabetic population remain underserved, and the nature of healthcare delivery is critical if it is to be successful in bringing about sustained positive health outcomes.
Given that health coaching is based on the principles of behaviour change and emphasizes a personalised approach with regular patient interaction, it presents a promising solution for managing PD and T2D through dietary and lifestyle changes. By customizing intervention strategies to meet the specific needs and circumstances of each patient, health coaching is aimed at enhancing patient engagement and, importantly, improving adherence, thereby eliciting sustained outcomes. This study explores the experiences of patients with PD and T2D, along with healthcare professionals, in a holistic care model centred on whole-food carbohydrate reduction. The model is multidisciplinary, combining a health coaching approach with a carbohydrate-reduction dietary approach. In this context, health coaching represents a style of working with individuals to facilitate positive changes in their health and well-being. It encompasses a personalised and culturally sensitive approach in which coaches acknowledge the complexity of behavioural change, embed this understanding in their practice, and respect individual differences. Health coaching necessitates collaborative efforts between patients and healthcare providers, empowering patients with the knowledge to take control of their own lives. Working within this comprehensive approach, our study aims to better understand the effectiveness of the model in empowering patients, healthcare practitioners, and the healthcare system to manage diabetes effectively. Further, it aims to gain insights into the feasibility of scaling up this model.
2.1. Model Characteristics in Two NZ Practices
The study examined two distinct healthcare systems in NZ—one public and one private. While each system operates uniquely, they share several common elements in their approach to managing T2D. Key facts are displayed, with full details, in the accompanying table.
2.1.1. Practice 1
This primary care private practice is spearheaded by a GP with over 30 years of experience in medicine. Over the past 3 years, they have integrated a therapeutic carbohydrate-reduction eating and coaching approach into their practice and regularly consult with patients via GP appointments. Additionally, the GP conducts weekly educational meetings, focusing on diabetes management and the benefits of a reduced carbohydrate diet for reversing the condition. Complementing these efforts, patients have the opportunity to work closely with a health coach upon GP referral, further individualizing their care and enhancing their understanding and application of these dietary principles. The patient population is predominantly NZ European with a range of sociodemographic backgrounds.
2.1.2. Practice 2
This publicly funded primary care practice is a Māori Provider Trust based on indigenous principles, offering family-based wellness and social services. The diabetes care model is led by a GP and Clinical Director who has been a proponent of carbohydrate reduction as a dietary approach for diabetes management since 2015. Once a GP has seen the patient, they refer them to the health coach and accompanying supporting initiatives. These services are comprehensive and offer patients access to health coaches, biweekly support groups conducted by the clinical director, and weekly cooking classes focused on reduced carbohydrate recipes. The practice also offers a broad spectrum of programmes, encompassing various aspects of well-being, not limited to dietary interventions. All of these services are available to patients as part of their enrolment at the practice. This multifaceted approach not only caters to the medical needs of patients but also addresses the cultural and community aspects of health, ensuring a holistic and culturally sensitive treatment pathway. Patients at this practice predominantly identify as belonging to Māori and Pacific Islander communities, with the practice located in an area characterised by lower socioeconomic status.
2.2. Data Collection
The study employed a mixture of one-on-one interviews with healthcare professionals and focus groups with patients. The one-on-one interviews were conducted virtually, while focus groups were held in person, in each respective region of NZ. All focus groups and interviews were carried out between November 2022 and April 2023. Participation was voluntary, and informed consent was provided before all scheduled interviews and focus group sessions. Ethical approval for this study was granted by the Auckland University of Technology Ethics Committee (AUTEC), reference number 22/253.
2.3. Focus Groups
The focus groups were designed with a semi-structured approach, enabling comprehensive discussions on the experiences of patients with the model of care and their perspectives on the reduced carbohydrate approach. Focus groups included patients from both practices. In all cases, participants were greeted by members of the research team who identified with both Māori and Pacific Island ethnicities. They then welcomed the participants in their native languages, using traditional protocols to open the focus group and allow cultural connection.
As a token of appreciation for the participants' time, a small gift of petrol vouchers and What the Fat!—a published book on carbohydrate reduction authored by the lead researchers of this study—were provided. From Practice 1, 22 patients participated and were divided into five focus groups (two groups of six patients, two groups of three, and one group of four). From Practice 2, 24 patients participated and were divided into four focus groups (comprising 10, six, four, and four patients, respectively). Patients were recruited via advertisements placed on social media and messaging platforms. Recruitment for focus groups continued until data saturation was achieved, which occurred after the completion of these sessions.
2.4. Interviews With Health Professionals
Parallel to patient focus groups, one-on-one interviews were conducted with health professionals from each practice. Interviews were held with two professionals (one doctor and one health coach) from Practice 1 and four professionals (one doctor and three health coaches) from Practice 2.
2.5. Data Analysis
All interviews and focus groups were recorded and transcribed verbatim using Otter AI Pro software Version 3.44.2-240223-4aa344c2, followed by manual verification by a member of the research team. The transcripts were then analysed using inductive thematic analysis via NVivo analytic software (Release 1.6.1 (1137), QSR International Pty Ltd). Data analysis was independently carried out by two authors (L.S. and M.P.). To ensure the reliability and validity of the thematic analysis, two additional authors (C.Z. and J.L.C.) reviewed all identified themes, refining categories through merging and subdividing where necessary. Quotations used throughout this report have been lightly edited to facilitate readability and to maintain the anonymity of participants/individuals mentioned by participants.
Focus group participants spanned a wide range of ages and ethnic backgrounds; demographic characteristics are detailed in the accompanying table. The inductive analysis identified five major themes: (1) reduced carbohydrate lifestyles, (2) health coaching, (3) implementation of the model, (4) empowerment, and (5) sustainability of the model. Within these broad categories, additional subthemes were identified, as presented in the accompanying table.
Key Themes and Discussion Points
Throughout each section of the results, each theme is described from the perspective of both patients and health practitioners; supporting quotes are presented in the tables accompanying each theme.
3.1. Theme 1: Reduced Carbohydrate Lifestyle (Diet and Lifestyle Approach)
3.1.1. Perceptions of Reduced Carbohydrate Diets
Patients transitioning to reduced carbohydrate diets experienced a significant shift in their dietary habits and gained a new understanding of what constitutes healthy food for T2D, challenging conventional beliefs about low-fat diets being the healthier dietary option. This was also noted in relation to the carnivore (meat-only) diet, which several patients were following despite initially having reservations about eating so much meat.
3.1.2. Positive Experiences
Patients implementing a reduced carbohydrate diet reported numerous benefits, including significant weight loss without hunger, a healthier, addiction-free relationship with food, and more stable energy levels for both themselves and their families. Many individuals observed substantial improvements in HbA1c levels and were able to reduce or discontinue medication. The diet also had a positive impact on a range of other health conditions, including heart palpitations and polycystic ovarian syndrome. All health coaches emphasised their role in creating a positive patient experience by helping patients find enjoyable foods and make small changes within culturally accepted dietary habits. Healthcare professionals also spoke about positive changes they had observed in patients, including improvements in glycaemic control, body weight, and the extent to which medications were needed. They further believed that the approach should be more widely utilised, and that the health system needed to recognise carbohydrate reduction as an effective method for treating and managing PD and T2D.
3.1.3. Barriers
Patients identified multiple barriers to adopting or maintaining a reduced carbohydrate eating approach. Key issues included scepticism, resistance, and a lack of up-to-date knowledge from healthcare professionals (particularly GPs and dietitians) and national organisations such as the New Zealand Society for the Study of Diabetes (NZSSD), Diabetes New Zealand, and the National Heart Foundation of New Zealand. Additionally, patients noted a lack of consensus and varying opinions within the low-carbohydrate community, causing confusion about optimal dietary choices. The influence and conflicts of interest from the food and pharmaceutical industries were a significant concern, alongside the cost of lower carbohydrate products. Patients suggested that more support from supermarkets, restaurants, and cafes was needed. Social challenges included negative perceptions from friends and family, with concerns about health impacts and disapproval of dietary choices. Special occasions such as holidays and social gatherings posed particular difficulties, with the temptation of high carbohydrate foods and social pressure to conform.
Healthcare professionals also highlighted the reluctance of some GPs to endorse carbohydrate-reduction approaches. The divergence from traditional dietary guidelines was seen as the main reason for this. Health professionals agreed that the carbohydrate-centric focus of both the national dietary guidelines and the diabetes-specific dietary guidelines in NZ was one of the biggest barriers faced, alongside the noted lack of support from GPs and dietitians. Supporting quotes aligning with these subthemes are presented in the accompanying table.
3.2. Theme 2: Health Coaching
3.2.1. Individualised Care That Is Culturally Appropriate
Patients with PD and T2D highly valued individualised care, where health coaches tailored their approach to each individual within a general framework. They appreciated the one-on-one attention, problem-solving, attention to detail, and thorough explanations. Patients also noted the genuine interest displayed by health coaches and their consideration of the individual's cultural background. Healthcare professionals stressed that personalising care was essential for success and very different from standard interactions between GPs and patients. Individualising care was felt to increase the accessibility of behavioural change, for example, by creating budget-friendly meals and helping patients to access local food while considering transportation limitations. Similarly, health coaches strongly stressed the importance of cultural appropriateness in healthcare. They further noted that being a health coach who was reflective of the community they serve was seen as an advantage. More generally, healthcare professionals highlighted the adaptability of their approach to suit the backgrounds of their patients.
3.2.2. Holistic, Wraparound Care
Patients appreciated the comprehensive approach of health coaches, which included not only dietary guidance but also cultural practice and other lifestyle factors. Some noted utilising other programs on offer via the primary healthcare provider which were not related to diet or diabetes but were nevertheless beneficial. Patients also valued that the approach extended to treating other members of their family/household. Healthcare professionals also stressed the need for a holistic care model that prioritizes individual well-being, emphasizing the importance of addressing the comprehensive needs of not just patients but also their families. This approach includes flexibility, adaptability, and the provision of free care in some instances. Additionally, health coaches highlighted the introduction of various health behaviours to enhance well-being beyond diabetes management, with attention to cultural considerations.
3.2.3. Support and Patient Accountability
Patients highlighted the significance of accountability and the role of support from both groups and health professionals in maintaining it. They valued the encouragement to take personal responsibility and the opportunity to share progress. The proactive engagement and consistent support from health coaches, along with the ease of communicating updates, feelings, and dietary choices via email, were particularly appreciated. Healthcare professionals acknowledged that behavioural change, particularly related to nutrition and lifestyle, is a significant challenge. They recognised the importance of a supportive environment and stressed the role of peers in making the process easier.
They also found that maintaining patient commitment was crucial for success, and regular checkups helped keep patients motivated and accountable for their goals. Nevertheless, the patient's own commitment and willingness to embrace change were noted as key determinants of their success in improving their health.
3.2.4. Improvement of Health Literacy
Gaining a greater understanding of their medical condition and nutrition was deemed critical by patients, and many appreciated exploring the science behind diabetes. Regular meetings were seen as particularly beneficial, offering engaging and stimulating discussions with repetitive reinforcement of key information. Several patients discussed how the information they were given acted as a springboard to pursuing their own research, with educational resources like informative videos playing a crucial role in enhancing their knowledge and self-accountability. Healthcare professionals described how patients often leave conventional medical appointments without a full understanding of their medical condition. They emphasised the importance of health coaches in bridging the gap in health literacy and providing patients with the knowledge required to manage their conditions effectively. This was especially important given the low health literacy in many communities, even where multiple family members were affected by diabetes. Supporting quotes aligning with these subthemes are presented in the accompanying table.
3.3. Theme 3: Implementation
3.3.1. Barriers
Patients expressed various obstacles to accessing care; for example, several were unaware of all the services and support available, including the existence of a Facebook group and weekly meetings. The biggest barrier was the financial aspect of the model, with GP appointments being prohibitively costly in the private system. Healthcare professionals spoke about barriers relating to the contradiction between the NZ nutrition guidelines and the carbohydrate-reduction approach, which caused tension among clinicians. Additionally, health coaches noted that some patients found greater comfort in dealing with traditional medical professionals over health coaches, partly due to confusion about the distinct roles of health and life coaches. Funding concerns were also prominent, with apprehensions about political influences and inconsistent funding, especially given the proximity to an upcoming election. The need for stable resources for health coach employment was stressed, along with uncertainties surrounding health coaching pay scales compared to other allied health professionals.
3.3.2. Resources
Patients appreciated the available resources and support but had specific suggestions to improve the implementation of the healthcare model in other locations. Some desired resources that were tailored to their specific needs and circumstances, highlighting the need for practical tools that could be personalised. Patients envisioned a central repository of resources accommodating diverse lifestyles, including working individuals, those with limited time, families, and various cultural backgrounds. They suggested the creation of a problem-solving forum, professionally produced resources for common issues, and practical aids like posters, tiered guides for sugar alternatives, and cheat sheets for making healthier choices at different eateries. Healthcare professionals echoed the desire for more resources, including materials and visual representations to enhance patient understanding.
Providers acknowledged that patients often seek validation and verification, for example, requesting meal plans despite there being a wide range of recipes and meal plans available on the Internet. The potential benefits of continuous glucose monitors (CGMs) were mentioned, with one health coach noting that three-monthly HbA1c testing was too infrequent for some patients, as they required more regular feedback on progress. A lack of funding was the current constraint to CGMs being made widely available.
3.3.3. Support Structures
Patients identified key support structures essential for their engagement in healthcare, including dedicated GPs, group meetings that fostered self-discipline and community, group chats, and health coach check-ins. The importance of family involvement was emphasised, and for those without supportive families, a buddy system was suggested. GPs recognised the significance of health coaches in managing lifestyle medicine clinics. Similar to patients, they also highlighted the role of community support systems, particularly through group meetings and digital platforms.
3.3.4. Patient Safety
Patient safety was not frequently spoken about by the patients themselves, although one participant noted their negative experience of stopping all medication abruptly. In contrast, patient safety is a paramount concern for healthcare professionals within the context of implementing a new model of healthcare. GPs particularly emphasised the significance of medication safety when transitioning patients to a reduced carbohydrate diet, noting that safe deprescribing was essential yet often inadequately taught. As mentioned previously, some healthcare professionals acknowledged the apprehension among other practitioners regarding the safety of therapeutic carbohydrate reduction, particularly when concerns arise about perceived potential risks like heart attacks.
3.3.5. Potential Improvements and Wider Reach
In addition to the previously mentioned resources, patients proposed improved communication about the full range of services and support available when starting the program. Some patients expressed a desire for the inclusion of exercise programs and classes within the program to enhance accountability, although this varied by location, with certain health coaches with fitness backgrounds already incorporating such elements. Healthcare professionals discussed various strategies for expanding the reach of the model of care to a broader audience, including exploring opportunities within Māori and Pacific communities and initiatives like community gardens. Additionally, they noted that accommodating different schedules by providing more flexible hours would be beneficial moving forward. Some individuals also noted that they would like to introduce exercise elements to health coaching, concurring with patient feedback.
3.3.6. The Role of Different Professionals Within the Model
The involvement of various professionals in the healthcare model was more frequently addressed by healthcare professionals than by patients. Patients' views on the importance of different professionals varied regionally, with some emphasizing health coaches and others relying more on GPs, particularly where GPs led weekly group sessions. A collaborative approach between GPs and health coaches was emphasised.
GPs were seen as crucial at the start of a patient's journey, particularly for safe prescribing and deprescribing before transitioning to a reduced carbohydrate approach with health coach guidance. The potential integration of health psychologists and dietitians was also discussed; the former could address the psychological aspects of patients' health journeys, while dietitians could assist complex patients with specific dietary needs. However, tension between health professionals was noted, with dietitians sometimes resisting reduced carbohydrate approaches and concerns about health coaches overlapping with dietitians' traditional roles. Supporting quotes aligning with these subthemes are presented in the accompanying table.
3.4. Theme 4: Empowerment
Patients reported feeling empowered through realizing that they could make meaningful changes in their own lives. For many, having choices was crucial, especially when they had previously been told they had no option but to take medication to manage their PD or T2D. Health professionals also spoke about patient choice being key and about the significance of patients going on a journey with a group of others who were working towards shared goals. They also reported having undergone a transformation in their own approach to the practice of medicine. One GP noted a shift from conventional practice to addressing the underlying causes of health issues. Supporting quotes aligning with these subthemes are presented in the accompanying table.
3.5. Theme 5: Sustainability
3.5.1. Health System Change
Patients recognised the challenges that could impede the success and sustainability of this approach, particularly in relation to the role of conventional doctors. Most of their concerns were linked with resistance to carbohydrate-reduction approaches and preventative medicine more broadly, as already discussed. Healthcare professionals also recognised the need for greater understanding of the importance and impact of lifestyle and preventive medicine. GPs discussed how the model could address issues such as short GP appointments, GP burnout, and the shortage of GPs. By delegating detailed nutritional and lifestyle guidance to allied health professionals and health coaches, GPs could reduce their workload and manage their time more effectively within the typical 15-min consultation model. Health coaches were considered instrumental to the model's long-term success, and it was suggested that having more health coaches than doctors could be desirable in the longer term. Sustainability was also linked with the need for comprehensive data collection on patient outcomes to help secure future funding for health coaching. Additionally, health coaches discussed strategies to improve their efficiency, such as grouping patients based on specific needs like sleep quality and exercise habits, allowing them to maximize their impact with limited time.
3.5.2. Stakeholder Support
Patients emphasised the necessity for enhanced support from the food industry and educational systems for any sustained success of lifestyle medicine to occur. They praised local efforts, like cafes offering reduced carbohydrate options, and advocated for the widespread availability of healthier food choices, especially in areas with lower socioeconomic status. The discussion also touched on the potential for supermarkets to play a significant role in making reduced-carb and low-sugar alternatives more accessible.
Healthcare providers also identified the challenges arising from external factors, primarily related to the national and diabetes nutrition guidelines which have a significant impact on clinical practices and public health policies. As noted earlier, there were also some concerns about funding support being withdrawn from health coaching programs when the government changed. presents supporting quotes aligning with these subthemes.
3.1.1. Perceptions of Reduced Carbohydrate Diets

Patients transitioning to reduced carbohydrate diets experienced a significant shift in their dietary habits and gained a new understanding of what constitutes healthy food for T2D, challenging conventional beliefs about low-fat diets being the healthier option. This was also noted in relation to the carnivore (meat-only) diet, which several patients were following despite their initial reservations about eating so much meat.

3.1.2. Positive Experiences

Patients implementing a reduced carbohydrate diet reported numerous benefits, including significant weight loss without hunger, a healthier, addiction-free relationship with food, and more stable energy levels for both themselves and their families. Many observed substantial improvements in HbA1c levels and were able to reduce or discontinue medication. The diet also had a positive impact on a range of other health conditions, including heart palpitations and polycystic ovarian syndrome. All health coaches emphasised their role in creating a positive patient experience by helping patients find enjoyable foods and make small changes within culturally accepted dietary habits. Healthcare professionals also spoke about positive changes they had observed in patients, including improvements in glycaemic control, body weight, and the extent to which medications were needed. They further believed that the approach should be more widely utilised, and that the health system needed to recognise carbohydrate reduction as an effective method for treating and managing PD and T2D.

3.1.3. Barriers

Patients identified multiple barriers to adopting or maintaining a reduced carbohydrate eating approach. Key issues included scepticism, resistance, and a lack of up-to-date knowledge among healthcare professionals (particularly GPs and dietitians) and national organisations such as the New Zealand Society for the Study of Diabetes (NZSSD), Diabetes New Zealand, and the National Heart Foundation of New Zealand. Additionally, patients noted a lack of consensus and varying opinions within the low-carbohydrate community, causing confusion about optimal dietary choices. The influence and conflicts of interest of the food and pharmaceutical industries were a significant concern, alongside the cost of lower carbohydrate products. Patients suggested that more support from supermarkets, restaurants, and cafes was needed. Social challenges included negative perceptions from friends and family, with concerns about health impacts and disapproval of dietary choices. Special occasions such as holidays and social gatherings posed particular difficulties, with the temptation of high carbohydrate foods and social pressure to conform. Healthcare professionals also highlighted the reluctance of some GPs to endorse carbohydrate-reduction approaches, with divergence from traditional dietary guidelines seen as the main reason for this. Health professionals agreed that the carbohydrate-centric focus of both the national dietary guidelines and the diabetes-specific dietary guidelines in NZ was one of the biggest barriers faced, alongside the noted lack of support from GPs and dietitians. The accompanying table presents supporting quotes aligning with these subthemes.
3.2.1. Individualised Care That Is Culturally Appropriate

Patients with PD and T2D highly valued individualised care, where health coaches tailored their approach to each individual within a general framework. They appreciated the one-on-one attention, problem-solving, attention to detail, and thorough explanations. Patients also noted the genuine interest displayed by health coaches and their consideration of the individual's cultural background. Healthcare professionals stressed that personalising care was essential for success and very different from standard interactions between GPs and patients. Individualising care was felt to increase the accessibility of behavioural change, for example, by creating budget-friendly meals and helping patients to access local food while considering transportation limitations. Similarly, health coaches strongly stressed the importance of cultural appropriateness in healthcare, noting that being a health coach who reflected the community they served was seen as an advantage. More generally, healthcare professionals highlighted the adaptability of their approach to suit the backgrounds of their patients.

3.2.2. Holistic, Wraparound Care

Patients appreciated the comprehensive approach of health coaches, which included not only dietary guidance but also cultural practice and other lifestyle factors. Some noted utilising other programs offered via the primary healthcare provider which were not related to diet or diabetes but were nevertheless beneficial. Patients also valued that the approach extended to treating other members of their family/household. Healthcare professionals likewise stressed the need for a holistic care model that prioritises individual well-being, emphasising the importance of addressing the comprehensive needs of not just patients but also their families. This approach includes flexibility, adaptability, and the provision of free care in some instances. Additionally, health coaches highlighted the introduction of various health behaviours to enhance well-being beyond diabetes management, with attention to cultural considerations.

3.2.3. Support and Patient Accountability

Patients highlighted the significance of accountability and the role of support from both groups and health professionals in maintaining it. They valued the encouragement to take personal responsibility and the opportunity to share progress. The proactive engagement and consistent support from health coaches, along with the ease of communicating updates, feelings, and dietary choices via email, were particularly appreciated. Healthcare professionals acknowledged that behavioural change, particularly related to nutrition and lifestyle, is a significant challenge. They recognised the importance of a supportive environment and stressed the role of peers in making the process easier. They also found that maintaining patient commitment was crucial for success, and regular check-ups helped keep patients motivated and accountable for their goals. Nevertheless, the patient's own commitment and willingness to embrace change were noted as key determinants of their success in improving their health.

3.2.4. Improvement of Health Literacy

Gaining a greater understanding of their medical condition and nutrition was deemed critical by patients, and many appreciated exploring the science behind diabetes. Regular meetings were seen as particularly beneficial, offering engaging and stimulating discussions with repeated reinforcement of key information. Several patients discussed how the information they were given acted as a springboard to pursuing their own research, with educational resources like informative videos playing a crucial role in enhancing their knowledge and self-accountability. Healthcare professionals described how patients often leave conventional medical appointments without a full understanding of their medical condition. They emphasised the importance of health coaches in bridging the gap in health literacy and providing patients with the knowledge required to manage their conditions effectively. This was seen as especially important given the low health literacy in many communities, even where multiple family members were living with diabetes. The accompanying table presents supporting quotes aligning with these subthemes.
3.3.1. Barriers

Patients described various obstacles to accessing care; for example, several were unaware of all the services and support available, including the existence of a Facebook group and weekly meetings. The biggest barrier was the financial aspect of the model, with GP appointments being prohibitively costly in the private system. Healthcare professionals spoke about barriers arising from the contradiction between the NZ nutrition guidelines and the carbohydrate-reduction approach, which caused tension among clinicians. Additionally, health coaches noted that some patients found greater comfort in dealing with traditional medical professionals than with health coaches, partly due to confusion about the distinct roles of health and life coaches. Funding concerns were also prominent, with apprehensions about political influences and inconsistent funding, especially given the proximity to an upcoming election. The need for stable resources for health coach employment was stressed, along with uncertainties surrounding health coaching pay scales compared to other allied health professionals.

3.3.2. Resources

Patients appreciated the available resources and support but had specific suggestions to improve the implementation of the healthcare model in other locations. Some desired resources tailored to their specific needs and circumstances, highlighting the need for practical tools that could be personalised. Patients envisioned a central repository of resources accommodating diverse lifestyles, including working individuals, those with limited time, families, and various cultural backgrounds. They suggested the creation of a problem-solving forum, professionally produced resources for common issues, and practical aids like posters, tiered guides for sugar alternatives, and cheat sheets for making healthier choices at different eateries. Healthcare professionals echoed the desire for more resources, including materials and visual representations to enhance patient understanding. Providers acknowledged that patients often seek validation and verification, for example, requesting meal plans despite the wide range of recipes and meal plans available on the Internet. The potential benefits of continuous glucose monitors (CGMs) were mentioned, with one health coach noting that three-monthly HbA1c testing was too infrequent for some patients, as they required more regular feedback on progress. Lack of funding was the main constraint to CGMs being made widely available.

3.3.3. Support Structures

Patients identified key support structures essential for their engagement in healthcare, including dedicated GPs, group meetings that fostered self-discipline and community, group chats, and health coach check-ins. The importance of family involvement was emphasised, and for those without supportive families, a buddy system was suggested. GPs recognised the significance of health coaches in managing lifestyle medicine clinics. Like patients, they also highlighted the role of community support systems, particularly through group meetings and digital platforms.

3.3.4. Patient Safety

Patient safety was not frequently raised by the patients themselves, although one participant noted their negative experience of stopping all medication abruptly. In contrast, patient safety was a paramount concern for healthcare professionals within the context of implementing a new model of healthcare. GPs particularly emphasised the significance of medication safety when transitioning patients to a reduced carbohydrate diet, noting that safe deprescribing was essential yet often an inadequately taught skill. As mentioned previously, some healthcare professionals acknowledged the apprehension among other practitioners regarding the safety of therapeutic carbohydrate reduction, particularly where concerns arise about perceived potential risks such as heart attacks.

3.3.5. Potential Improvements and Wider Reach

In addition to the previously mentioned resources, patients proposed improved communication about the full range of services and support available when starting the program. Some patients expressed a desire for the inclusion of exercise programs and classes within the program to enhance accountability, although this varied by location, with certain health coaches with fitness backgrounds already incorporating such elements. Healthcare professionals discussed various strategies for expanding the reach of the model of care to a broader audience, including exploring opportunities within Māori and Pacific communities and initiatives like community gardens. Additionally, they noted that accommodating different schedules by providing more flexible hours would be beneficial moving forward. Some also noted that they would like to introduce exercise elements to health coaching, concurring with patient feedback.

3.3.6. The Role of Different Professionals Within the Model

The involvement of various professionals in the healthcare model was more frequently addressed by healthcare professionals than by patients. Patients' views on the importance of different professionals varied regionally, with some emphasising health coaches and others relying more on GPs, particularly where GPs led weekly group sessions. A collaborative approach between GPs and health coaches was emphasised. GPs were seen as crucial at the start of a patient's journey, particularly for safe prescribing and deprescribing before transitioning to a reduced carbohydrate approach with health coach guidance. The potential integration of health psychologists and dietitians was also discussed, with the former able to address the psychological aspects of patients' health journeys, while dietitians could assist complex patients with specific dietary needs. However, tension was noted between health professionals, with dietitians sometimes resisting reduced carbohydrate approaches and concerns about health coaches overlapping with dietitians' traditional roles. The accompanying table presents supporting quotes aligning with these subthemes.
3.4. Theme 4: Empowerment

Patients reported feeling empowered through realising that they could make meaningful changes in their own lives. For many, having choices was crucial, especially when they had previously been told they had no option but to take medication to manage their PD or T2D. Health professionals also spoke about patient choice being key and about the significance of patients going on a journey with a group of others working towards shared goals. They also reported having undergone a transformation in their own approach to the practice of medicine; one GP noted a shift from conventional practice to addressing the underlying causes of health issues. The accompanying table presents supporting quotes aligning with these subthemes.
3.5. Theme 5: Sustainability

3.5.1. Health System Change

Patients recognised the challenges that could impede the success and sustainability of this approach, particularly in relation to the role of conventional doctors. Most of their concerns were linked with resistance to carbohydrate-reduction approaches and preventative medicine more broadly, as already discussed. Healthcare professionals also recognised the need for greater understanding of the importance and impact of lifestyle and preventive medicine. GPs discussed how the model could address issues such as short GP appointments, GP burnout, and the shortage of GPs. By delegating detailed nutritional and lifestyle guidance to allied health professionals and health coaches, GPs could reduce their workload and manage their time more effectively within the typical 15-min consultation model. Health coaches were considered instrumental to the model's long-term success, and it was suggested that having more health coaches than doctors could be desirable in the longer term. Sustainability was also linked with the need for comprehensive data collection on patient outcomes to help secure future funding for health coaching. Additionally, health coaches discussed strategies to improve their efficiency, such as grouping patients based on specific needs like sleep quality and exercise habits, allowing them to maximise their impact with limited time.

3.5.2. Stakeholder Support

Patients emphasised the necessity of enhanced support from the food industry and educational systems for any sustained success of lifestyle medicine. They praised local efforts, like cafes offering reduced carbohydrate options, and advocated for the widespread availability of healthier food choices, especially in areas of lower socioeconomic status. The discussion also touched on the potential for supermarkets to play a significant role in making reduced-carbohydrate and low-sugar alternatives more accessible. Healthcare providers also identified challenges arising from external factors, primarily related to the national and diabetes nutrition guidelines, which have a significant impact on clinical practices and public health policies. As noted earlier, there were also some concerns about funding support being withdrawn from health coaching programs with a change of government. The accompanying table presents supporting quotes aligning with these subthemes.
4. Discussion

The purpose of this study was to explore the experiences of patients with PD or T2D, as well as health professionals involved in a holistic model of care based on whole-food carbohydrate reduction. While there were some variations in views between patients and healthcare professionals, there was consensus on the success of the model and the reasons why it was felt to work. Key findings included significant health improvements reported by patients, including weight loss, better glycaemic control, and increased energy levels. Health coaching emerged as a critical component, facilitating regular, personalised interactions that contributed to patient empowerment and autonomy. Barriers included resistance from some medical professionals, public perceptions about carbohydrate reduction, and financial constraints affecting access to healthy food options and GP appointments. Despite these barriers, this holistic model shows promise for managing and potentially reversing PD and T2D, advocating for a shift in healthcare practice towards lifestyle medicine delivered via a health coaching approach.

4.1. Reduced Carbohydrate Approaches

Despite the overwhelmingly positive experiences and outcomes of participants using a therapeutic carbohydrate-reduction approach, both in this study and more broadly, the approach is still met with widespread scepticism and resistance from health professionals in NZ, including GPs and dietitians. Accordingly, there is a critical need for more widespread education and awareness of the growing literature on the efficacy of therapeutic carbohydrate reduction for diabetes management. This resistance appears to stem from concerns about the high saturated fat content of some reduced carbohydrate diets, the potential impact on LDL cholesterol, and, consequently, cardiovascular health. Recent debates have challenged the traditional view that saturated fat increases the risk of cardiovascular disease. While uncertainty remains, there is a need for nuanced distinctions between different types of saturated fats and their diverse food sources, with a greater focus on overall dietary patterns. It is also important to interpret lipid markers in the context of overall health rather than in isolation. Evidence suggests that within the context of a reduced carbohydrate diet replete in fibre, vitamins, and minerals and low in ultraprocessed foods, blood lipid markers may in fact undergo positive changes, including over long time frames, particularly in individuals who are overweight or obese. This should be an area of focus in helping patients with PD and T2D. Another point of resistance was the lack of formal endorsement by NZSSD. While international consensus guidelines in many countries and regions, including the United States, Canada, Europe, the United Kingdom, and Australia, now endorse carbohydrate reduction as a legitimate option for the management of T2D, considerable work remains in NZ to bring about a similar shift in perspective. It is likely that some resistance to this dietary approach will persist until concerns about lipid profiles and official guidelines are addressed or resolved. For patients in the present study, however, good adherence to this eating approach was maintained in the face of such challenges.
This adherence was facilitated by their experience of improved health outcomes, alongside strong support structures, including health coaches, peer groups, and family involvement, which have been recognised for their effectiveness in diabetes management. When comparing our results to global trends in diabetes care, carbohydrate-reduction models have been integrated into healthcare systems with varying levels of success. In the United States, the Virta Health initiative has shown significant long-term improvements in glycaemic control and medication reduction through a continuous remote care model. The use of frequent digital monitoring tools, including CGMs, allows for real-time feedback and higher patient adherence. However, Virta Health's status as a private entity allows greater access to these advanced technologies. In contrast, within NZ's government-subsidised healthcare system, limited access to CGMs and other monitoring technologies may hinder the immediacy of patient feedback and overall success. Nevertheless, regular check-ins with health coaches can go some way towards bridging this gap. Similarly, in the United Kingdom, Dr David Unwin's approach within the National Health Service (NHS) has demonstrated success in helping patients achieve drug-free remission of T2D through a low-carbohydrate dietary intervention. Unwin's model emphasises patient education and the use of visual aids to illustrate carbohydrate's effects on blood glucose, which has significantly improved patient outcomes. Similar to our findings, regular check-ins and phone calls to motivate patients if they start to deviate from their dietary goals are instrumental and can rapidly get patients back on track. Despite these successes, resistance to, or refusal to recommend, reduced carbohydrate diets remains relatively widespread. Previous studies examining patient experiences of managing T2D with reduced carbohydrate diets have also found that patients frequently report scepticism, resistance, and a lack of knowledge from their GPs and other health professionals. Much like in the present study, patients reported that they had previously been automatically placed on medication rather than given the guidance needed to reverse their diabetes via diet and lifestyle. Despite this, both patient and practitioner experiences of reduced carbohydrate diets in the context of diabetes management have been extremely positive. As in the current study, practitioners have previously reported that they were finally able to change patients' lives after years of failure to do so when recommending only conventional higher carbohydrate, low-fat diets.

4.2. Health Coaching and Sustainability of the Model

Both patients and health professionals were universally positive about the health coaching approach studied in the two models of care. The effectiveness of health coaching in improving outcomes for patients with various chronic diseases has previously been documented. With an emphasis on personalised, regular interactions that foster accountability, reduce anxiety, and promote sustained self-care, health coaching can bring about behaviour change in a way that standard models of care rarely achieve. A shift from passive acceptance of pharmacological interventions to actively making lifestyle choices is facilitated when healthcare providers support patient autonomy, leading to increased patient self-confidence. This highlights the importance of a patient-centred approach in healthcare.
In this regard, giving patients sufficient knowledge to make informed choices is critical, which is difficult in standard 15-min GP appointments and likely explains the rarity of sustained behaviour change in the conventional model. Integrating more health coaches into the existing model of care could manage these time-intensive aspects of patient care, significantly lessening the burden on GPs while still providing this personalised support. This need for more nonclinical staff such as health coaches has been highlighted in other recent work in an NZ context, with clinical staff expressing that a greater number of nonclinicians could ease the T2D workload burden. This shift could move patients from lifelong dependence on medication towards being able to reverse chronic conditions and regain their health. Similarly, this transition redefines the role of GPs from prescribers of medication to facilitators of holistic health and well-being, reducing workload in the longer term. Given the figures around GP burnout and retirement, this is significant. The approach therefore has the potential to optimise healthcare delivery in a system currently under significant stress, while mitigating the economic burden on the health system by decreasing long-term healthcare costs and medication use. Although there are challenges in scaling this model nationwide, particularly in terms of health coach funding and integration, progress has already been made. Notably, since the focus groups were conducted, funding for health coaches in NZ has improved, and, as seen in our engagement with multiple primary care settings across NZ, they are becoming better integrated into the healthcare system. This positive trend is encouraging, as it reflects growing recognition of the role of health coaches in supporting sustained behaviour change and improving patient outcomes, allowing more patients to benefit from this approach. It is essential, however, that training sufficiently equips health coaches with the behaviour change skills and relevant nutrition knowledge required to be effective. This is unfortunately still inconsistent in NZ and is an area for development. Collection of data and dissemination of results related to the success of health coaching approaches may further help secure more funding moving forward. In the long term, the financial implications for the healthcare system are likely to be positive. Preventing the progression from PD to T2D, as well as reducing disease-related complications among those already diagnosed with T2D, could lead to significant cost savings. Although no formal financial modelling has been done, it is reasonable to assume that fewer patients with advanced T2D would alleviate some of the burden on the healthcare system. This has been demonstrated at Norwood Surgery in the United Kingdom, where Dr Unwin's practice reported significant savings of public health funds due to a reduction in diabetes medications through the use of a low-carbohydrate dietary approach. Furthermore, if this approach proves more accessible or palatable to underserved populations such as Māori and Pasifika, it could have broader implications for improving health outcomes for these groups.
4.3. Future Improvements in the Implementation of the Model

In addition to the health system changes noted above, involving reorientation of the workforce to include more health coaches, updated guidelines, and greater acceptance of reduced carbohydrate approaches, our study highlights other areas for improvement if the model is to be implemented at scale. Firstly, creating resources to address the varied needs of patients managing chronic conditions is crucial. In an era saturated with information on the internet, establishing reliability becomes challenging, and patients may place greater value on centrally provided resources and information. In response to the feedback from the focus groups, we have developed extensive resources to support patients and health professionals, including a dedicated website offering recipes, practical tips for reducing carbohydrates on a budget, guidance for those with limited time, culturally tailored advice, and relevant scientific literature. While the website is associated with our implementation science research, it is free and accessible to anyone wishing to learn more about reduced carbohydrate eating patterns. Secondly, there is a great need for culturally safe and inclusive healthcare solutions that reach underserved communities facing health disparities. In an NZ context, this includes engagement with Māori and Pacific communities to identify champions within their own communities to lead lifestyle changes. In agreement with previous research, we found that engagement and understanding improved when health coaches or other health professionals came from similar ethnic backgrounds. The model's alignment with Te Whare Tapa Whā, a holistic Māori model of health, further supports its cultural relevance. This framework encompasses four dimensions: taha tinana (physical health), taha hinengaro (mental health), taha wairua (spiritual health), and taha whānau (family health), and allows for a culturally resonant approach that addresses the values and needs of Māori populations. More broadly, tailoring health coaching practices to different cultural contexts would enhance the practical applicability of this model across diverse patient populations. For example, health coaching approaches could be adapted to reflect the cultural values, dietary traditions, and communication styles of various ethnic and immigrant groups, ensuring that advice is relevant and accessible. In cultures where family and community play a central role in decision-making, health coaches could engage family members in lifestyle changes to improve adherence and outcomes, as was evident in the present study. Additionally, offering health education materials and resources in multiple languages and using culturally appropriate metaphors or examples would further support inclusivity. Developing cultural competency training for health coaches would ensure that they are equipped to address the unique needs of different populations, making the model adaptable and effective across a wide range of healthcare settings. Finally, there is an evident need for a shift in the role of dietitians. Collaborative efforts between dietitians and health coaches could bridge gaps in health literacy and cultural sensitivity, with each profession bringing different strengths. This partnership could enhance chronic disease management by ensuring consistent, evidence-based dietary guidance.
However, the resistance within NZ's dietetic profession towards adopting carbohydrate-reduction approaches, despite the evidence, suggests a need for a paradigm shift in both training and guidelines to fully embrace this model in PD and T2D management.

4.4. Overcoming Barriers

In addition to the challenges discussed above, this study identified other significant barriers to the adoption of a reduced carbohydrate model and health coaching approach. Financial constraints were frequently noted by patients, including the high cost of lower carbohydrate food options and the expense of GP appointments. While the website we developed offers practical tips for reducing carbohydrates on a budget, larger-scale policy interventions, such as subsidies for healthier food and more widely affordable healthcare services, are still needed to facilitate broader adoption of the model. These challenges are not specific to reduced carbohydrate approaches, and patients following alternative diabetes diets have also noted difficulties in accessing healthy food options. In accordance with previous research, social factors also posed challenges for patients, as several reported feeling social pressure from family and friends who were sceptical of the health impacts of carbohydrate reduction. As with patients on a variety of other diabetes diets, special occasions, such as holidays and gatherings, presented specific difficulties, with patients feeling obligated to consume high-carbohydrate “treat” foods in order to conform. To help patients navigate these challenges, structured support strategies could be implemented, including role-playing social scenarios in health coaching sessions so that patients can practise declining food or explaining their dietary choices. We have also developed specific resources, such as a “cheat sheet” providing practical guidance on the most and least favourable reduced-carbohydrate options at various takeaway outlets across NZ, including bakeries and Chinese, Italian, and fast-food restaurants, which can empower patients to make informed choices even in social or convenience-driven situations. Finally, the holistic nature of this healthcare model, which aligns with the principles of Te Whare Tapa Whā and incorporates family and community in the lifestyle change process, may in itself reduce scepticism and help overcome barriers to its adoption, particularly as positive health outcomes are achieved.

4.5. Study Strengths and Weaknesses

One of the primary strengths of this study lies in its comprehensive exploration of a novel model of care for diabetes management, particularly its integration of health coaching and a reduced carbohydrate eating approach. This research adds significant value to the field by focusing on patients with both diagnosed PD and T2D, the former being a group that is often underrepresented in diabetes research and has traditionally been underserviced within the healthcare system. The inclusion of diverse perspectives from both patients and healthcare professionals (doctors and health coaches) enriches the study's findings, offering a well-rounded understanding of the model's impact. Further, patients and practitioners from a range of cultural and ethnic backgrounds were included, with patients covering a wide range of ages. While the patient sample was small relative to the entire patient population that has experienced this model of care, we achieved saturation in data collection from the focus groups, ensuring comprehensive coverage of information.
The study is not without limitations. The self-selecting nature of participation in the focus groups has the potential to introduce selection bias: participants more positively inclined towards the model of care, perhaps owing to their own success, might have been more motivated to take part, potentially skewing the data. However, not all patients reported successful diabetes outcomes, indicating that participants were able to separate their experience of the model of care from their treatment outcomes. This was encouraging, as it indicated a more authentic critique of the model independent of the influence of outcomes, and it also allowed insights into how the model of care adapted when patients did not succeed. Moreover, even when previous studies have tried to recruit practitioners with negative experiences of delivering reduced carbohydrate guidance, they have found overwhelmingly positive experiences, suggesting this may not reflect a biased sample but rather be representative of widespread experiences. Nevertheless, patients included in the study were those who chose to manage their PD or T2D via diet and lifestyle measures. As such, they likely represent a subset of patients who are highly motivated to make changes. This reflects the real-world scenario, where individuals who opt for lifestyle interventions are typically those already inclined towards making such changes. Consequently, the focus on motivated patients aligns with the population to whom this model would realistically apply. While randomised controlled trials have demonstrated the efficacy of reduced carbohydrate approaches under controlled conditions, our study examines the real-world effectiveness of this model. As such, the absence of less motivated individuals does not detract from the relevance of these findings for the target group that is open to lifestyle-based management.

4.6. Research Implications and Future Research

The findings of this study have significant implications for the healthcare system in NZ and elsewhere, particularly in the context of the challenges posed by the high number of GPs nearing retirement. Diabetes poses a growing global challenge, with over 500 million individuals affected as of 2021 and projections indicating a rise to over 1.3 billion by 2050. This highlights the urgent need for innovative approaches to address the escalating burden of diabetes, particularly given the lack of progress in diabetes reversal in mainstream healthcare delivery. A reorientation of the system and of health professional roles towards a more holistic model, incorporating established community support structures and grounded in behaviour change principles, may be timely. In response to the growing need for evidence on the long-term impact of this approach, future research will focus on expanding the scope of this model across more primary care clinics in NZ. We are currently undertaking a wider study that will examine the transition to reduced carbohydrate approaches combined with health coaching in a larger and more diverse set of clinical settings. This work will involve collecting comprehensive data on patient outcomes, including glycaemic control, weight management, and medication reduction, to evaluate the long-term sustainability and scalability of the model.
These data will also provide insights into the success factors and barriers within real-world settings, further informing the integration of this model into the healthcare system.
Despite the overwhelmingly positive experiences and outcomes of participants using a therapeutic carbohydrate-reduction approach both in this study and more broadly , the approach is still met with widespread scepticism and resistance from health professionals in NZ , including GPs and dietitians. Accordingly, there is a critical need for more widespread education and awareness around the growing literature on the efficacy of therapeutic carbohydrate reduction for diabetes management . The ostensible resistance is evidently owing to concerns about the high saturated fat content of some reduced carbohydrate diets, the potential impact on LDL cholesterol, and, consequentially, cardiovascular health. Recent debates have challenged the traditional view that saturated fat increases the risk of cardiovascular disease . While uncertainty remains, there is a need for nuanced distinctions between different types of saturated fats and their diverse food sources, with a greater focus on overall dietary patterns . It is further important to interpret lipid markers in the context of overall health rather than in isolation. Evidence suggests that within the context of a reduced carbohydrate diet replete in fibre, vitamins, and minerals and low in ultraprocessed foods, blood lipid markers may in fact undergo positive changes including over long time frames , particularly in individuals who are overweight or obese . This should be an area of focus in helping patients with PD and T2D. Another point of resistance asserted was the lack of formal endorsement by NZSSD. While international consensus guidelines in many countries/regions including the United States, Canada, Europe, the United Kingdom, and Australia now endorse carbohydrate reduction as a legitimate option for the management of T2D , there remains considerable work to do in NZ to bring about a similar shift in perspective . It is likely that some resistance to this dietary approach will persist until concerns about lipid profiles and official guidelines are addressed or resolved. For patients in the present study, however, in the face of such challenges, good adherence to this eating approach was maintained. This was facilitated by their experience of improved health outcomes, alongside strong support structures, including health coaches, peer groups, and family involvement, which have been recognised for their effectiveness in diabetes management . When comparing our results to global trends in diabetes care, carbohydrate-reduction models have been integrated into healthcare systems with varying levels of success. In the United States, the Virta Health initiative has shown significant long-term improvements in glycaemic control and medication reduction through a continuous remote care model . The use of frequent digital monitoring tools, including CGMs, allows for real-time feedback and higher patient adherence. However, Virta Health's status as a private entity allows for greater accessibility to these advanced technologies. In contrast, within NZ's government-subsidised healthcare system, limited access to CGMs and other monitoring technologies may hinder the immediacy of patient feedback and overall success. Nevertheless, regular check-ins with health coaches can go some way to bridging this gap. Similarly, in the United Kingdom, Dr David Unwin's approach within the National Health Service (NHS) has demonstrated success in helping patients achieve drug-free remission of T2D through a low-carbohydrate dietary intervention . 
Unwin's model emphasises patient education and the use of visual aids to illustrate carbohydrate's effects on blood glucose, which has significantly improved patient outcomes. Similar to our findings, regular check-ins and phone calls to motivate patients if they start to deviate from their dietary goals are instrumental and can rapidly get patients back on track. Despite these successes, resistance to, or refusal to recommend, reduced carbohydrate diets remains relatively widespread. Previous studies examining patient experiences of managing T2D with reduced carbohydrate diets have also found that patients frequently report scepticism, resistance, and a lack of knowledge from their GPs and other health professionals. Much like in the present study, patients report that they previously experienced being automatically placed on medication, rather than being given the guidance needed to reverse their diabetes via diet and lifestyle. Despite this, both patient and practitioner experiences of reduced carbohydrate diets in the context of diabetes management have been extremely positive. As in the current study, practitioners have previously reported that they were finally able to change patients' lives after years of failure to do so when recommending only the conventional higher carbohydrate, low-fat diets.
Both patients and health professionals were universally positive about the health coaching approach studied in the two models of care. The effectiveness of health coaching in improving outcomes for patients with various chronic diseases has previously been documented. With its emphasis on personalised, regular interactions that foster accountability, reduce anxiety, and promote sustained self-care, health coaching can bring about behaviour change in a way that standard models of care rarely achieve. A shift from passive acceptance of pharmacological interventions to actively making lifestyle choices is facilitated when healthcare providers support patient autonomy, leading to increased patient self-confidence. This highlights the importance of a patient-centred approach in healthcare. In this regard, giving patients sufficient knowledge to be able to make informed choices is critical, which is difficult in standard 15-minute GP appointments and likely explains the rarity of sustained behaviour change in the conventional model. Integrating more health coaches into the existing model of care could manage these time-intensive aspects of patient care, significantly lessening the burden on GPs while still providing this personalised support. This need for more nonclinical staff such as health coaches has been highlighted in other recent work in an NZ context, with clinical staff expressing that a greater number of nonclinicians could ease the T2D workload burden. This shift could move patients from being lifelong dependents on medication to individuals who are able to reverse chronic conditions and regain their health. Similarly, this transition redefines the role of GPs from prescribers of medication to facilitators of holistic health and well-being, reducing workload in the longer term. Given the figures around GP burnout and retirement, this is significant. The approach, therefore, has the potential to optimise healthcare delivery in a system currently under significant stress, while mitigating the economic burden on the health system by decreasing long-term healthcare costs and medication use.

Although there are challenges in scaling this model nationwide, particularly in terms of health coach funding and integration, progress has already been made. Notably, since the focus groups were conducted, funding for health coaches in NZ has improved, and, as seen in our engagement with multiple primary care settings across NZ, they are becoming better integrated into the healthcare system. This positive trend is encouraging, as it reflects growing recognition of the role of health coaches in supporting sustained behaviour change and improving patient outcomes, allowing more patients to benefit from this approach. It is essential, however, that training sufficiently equips health coaches with the behaviour change skills and relevant nutrition knowledge required to be effective. This is unfortunately still inconsistent in NZ and is an area for development. Collection of data and dissemination of results related to the success of health coaching approaches may further help in securing more funding moving forward. In the long term, financial implications for the healthcare system are likely to be positive. Preventing the progression from PD to T2D, as well as reducing disease-related complications among those already diagnosed with T2D, could lead to significant cost savings.
Although no formal financial modelling has been done, it is reasonable to assume that fewer patients with advanced T2D would alleviate some of the burden on the healthcare system. This has been demonstrated at Norwood Surgery in the United Kingdom, where Dr Unwin's practice reported significant savings of public health funds owing to a reduction in diabetes medications through the use of a low-carbohydrate dietary approach. Furthermore, if this approach proves more accessible or palatable to underserved populations such as Māori and Pasifika, it could have broader implications for improving health outcomes for these groups.
In addition to the health system changes noted above, involving reorientation of the workforce to include more health coaches, updated guidelines, and greater acceptance of reduced carbohydrate approaches, our study highlights other areas for improvement if the model is to be implemented at scale. First, creating resources to address the varied needs of patients managing chronic conditions is crucial. In an era saturated with information on the internet, establishing reliability becomes challenging, and patients may frequently place greater value on centrally provided resources and information. In response to the feedback from focus groups, we have developed extensive resources to support patients and health professionals, including a dedicated website offering recipes, practical tips for reducing carbohydrates on a budget, guidance for those with limited time, culturally tailored advice, and relevant scientific literature. While the website is associated with our implementation science research, it is free and accessible to anyone wishing to learn more about reduced carbohydrate eating patterns.

Second, there is a great need for culturally safe and inclusive healthcare solutions that reach underserved communities facing health disparities. In an NZ context, this includes engagement with Māori and Pacific communities to identify champions within their own communities to lead lifestyle changes. In agreement with previous research, we found that engagement and understanding improved when health coaches or other health professionals came from similar ethnic backgrounds. The model's alignment with Te Whare Tapa Whā, a holistic Māori model of health, further supports its cultural relevance. This health framework encompasses four dimensions, taha tinana (physical health), taha hinengaro (mental health), taha wairua (spiritual health), and taha whānau (family health), and allows for a culturally resonant approach that addresses the values and needs of Māori populations. More broadly, tailoring health coaching practices to different cultural contexts would enhance the practical applicability of this model across diverse patient populations. For example, health coaching approaches could be adapted to reflect the cultural values, dietary traditions, and communication styles of various ethnic and immigrant groups, ensuring that advice is relevant and accessible. In cultures where family and community play a central role in decision-making, health coaches could engage family members in lifestyle changes to improve adherence and outcomes, as was evident in the present study. Additionally, offering health education materials and resources in multiple languages and using culturally appropriate metaphors or examples would further support inclusivity. Developing cultural competency training for health coaches would ensure that they are equipped to address the unique needs of different populations, making the model adaptable and effective across a wide range of healthcare settings.

Finally, there is an evident need for a shift in the role of dietitians. Collaborative efforts between dietitians and health coaches could bridge gaps in health literacy and cultural sensitivity, with each profession bringing different strengths. This partnership could enhance chronic disease management by ensuring consistent, evidence-based dietary guidance.
However, the resistance within NZ's dietetic profession towards adopting carbohydrate-reduction approaches, despite the evidence, suggests a need for a paradigm shift in both training and guidelines to fully embrace this model in PD and T2D management.
In addition to the challenges discussed above, this study identified other significant barriers to the adoption of a reduced carbohydrate model and health coaching approach. Financial constraints were frequently noted by patients, including the high cost of lower carbohydrate food options and the expense of GP appointments. While the website we developed offers practical tips for reducing carbohydrates on a budget, larger-scale policy interventions, such as subsidies for healthier food and more widespread affordable healthcare services, are still needed to facilitate broader adoption of the model. These challenges are not specific to reduced carbohydrate approaches, and patients following alternative diabetes diets have also noted difficulties in accessing healthy food options. In accordance with previous research, social factors also posed challenges for patients, as several reported feeling social pressure from family and friends who were sceptical of the health impacts of carbohydrate reduction. As with patients on a variety of other diabetes diets, special occasions, such as holidays and gatherings, presented specific difficulties, with patients feeling obligated to consume high-carbohydrate "treat" foods in order to conform. To help patients navigate these challenges, structured support strategies could be implemented, including role-playing social scenarios in health coaching sessions to help patients practise declining food or explaining their dietary choices. We have also developed specific resources, such as a "cheat sheet" that provides practical guidance on the most and least favourable reduced-carbohydrate options at various takeaway outlets across NZ (including bakeries and Chinese, Italian, and fast-food restaurants), which can empower patients to make informed choices even in social or convenience-driven situations. Finally, the holistic nature of this healthcare model, which aligns with the principles of Te Whare Tapa Whā and incorporates family and community in the lifestyle change process, may in itself reduce scepticism and help overcome barriers to its adoption, particularly as positive health outcomes are achieved.
One of the primary strengths of this study lies in its comprehensive exploration of a novel model of care for diabetes management, particularly highlighting the integration of health coaching and a reduced carbohydrate eating approach. This research adds significant value to the field by focusing on patients with both diagnosed PD and T2D, the former being a group that is often underrepresented in diabetes research and has traditionally been underserved within the healthcare system. The inclusion of diverse perspectives from both patients and healthcare professionals (doctors and health coaches) enriches the study's findings, offering a well-rounded understanding of the model's impact. Further, patients and practitioners from a range of cultural and ethnic backgrounds were included, with patients covering a wide range of ages. While the patient sample was small relative to the entire patient population that has experienced this model of care, we achieved saturation in data collection from the focus groups, ensuring comprehensive coverage of information.

The study is not without limitations. The self-selecting nature of the participants who volunteered for the focus groups has the potential to introduce a selection bias. Participants who were more positively inclined towards the model of care, depending on their success, might have been more motivated to participate, potentially skewing the data. However, it was noted that not all patients had experienced success with their diabetes outcomes, indicating that patients were able to separate their experiences of the model of care from their treatment outcomes. This was encouraging, as it indicated a more authentic critique of the model, independent of the influence of outcomes. It also allowed for insights into how the model of care adapted to situations of patient failure. Moreover, even when previous studies have tried to recruit practitioners with negative experiences of delivering reduced carbohydrate guidance, they have found overwhelmingly positive experiences, suggesting this may not reflect a biased sample but rather be representative of widespread experiences. Nevertheless, patients included in the study were those who chose to manage their PD or T2D via diet and lifestyle measures. As such, they likely represent a subset of patients who are highly motivated to make changes. This reflects the real-world scenario, where individuals who opt for lifestyle interventions are typically those already inclined towards making such changes. Consequently, the focus on motivated patients aligns with the population to whom this model would realistically apply. While randomised controlled trials have demonstrated the efficacy of reduced carbohydrate approaches under controlled conditions, our study examines the real-world effectiveness of this model. As such, the absence of less motivated individuals does not detract from the relevance of these findings for the target group that is open to lifestyle-based management.
The findings of this study have significant implications for the healthcare system in NZ and elsewhere, particularly in the context of the challenges posed by the high number of GPs nearing retirement. Diabetes poses a growing global challenge, with over 500 million individuals affected as of 2021, and projections indicate that this will rise to over 1.3 billion by 2050. This highlights the urgent need for innovative approaches to address the escalating burden of diabetes, particularly given the lack of progress in diabetes reversal in mainstream healthcare delivery. A system and health professional role reorientation towards a more holistic model, incorporating established community support structures and grounded in behaviour change principles, may be timely. In response to the growing need for evidence on the long-term impact of this approach, future research will focus on expanding the scope of this model across more primary care clinics in NZ. We are currently undertaking a wider study that will examine the transition to reduced carbohydrate approaches combined with health coaching in a larger and more diverse set of clinical settings. This work will involve collecting comprehensive data on patient outcomes, including glycaemic control, weight management, and medication reduction, to evaluate the long-term sustainability and scalability of the model. These data will also provide insights into the success factors and barriers within real-world settings, further informing the integration of this model into the healthcare system.
In summary, this study identifies both the promises and challenges associated with the healthcare model under investigation. Adopting a carbohydrate-reduction approach delivered significant health benefits for patients but was met with barriers, including resistance from healthcare professionals and social perceptions. Health coaching proved to be an invaluable component of the model, offering individualised care and support based on the framework of behaviour change and cultural responsiveness, while also addressing the need for increased health literacy. To ensure sustainable implementation, the model requires enhanced education for healthcare professionals, comprehensive data on patient outcomes, and increased public awareness. Importantly, this study demonstrates the potential for a paradigm shift from a pharmaceutical-based system to one prioritising lifestyle medicine, empowering patients to regain control over their health and reducing the burden on primary care systems managing lifestyle-related conditions like T2D. Future research should explore scaling this model and evaluating its long-term impacts.
Social Media Use for Health Communication by the CDC in Mainland China: National Survey Study 2009-2020

One of the keys to dealing with public health emergencies is timely and effective risk communication. Through dialogue between the government and the public, the information asymmetry between the two can be minimized, helping the public to take preventive measures quickly. With the development of information and communication technology, social media, characterized by participation, openness, and dialogue, has brought unprecedented opportunities for improving communication between the government and the public. Social media can thus strengthen the government's capacity to communicate with the public and help realize open government. For example, previous studies have shown that government use of social media can promote cross-sectoral information sharing, increase government transparency, promote public political participation, and help the public respond to disasters.

Public health authorities are also increasingly using social media for information disclosure and risk communication. Previous studies have analyzed the main factors behind public health authorities' willingness to adopt social media, such as organizational size and geographical location. Studies have also shown that public health authorities use traditional media to respond to general health problems and social media to respond to public health emergencies, a role that has been confirmed by other studies. For example, analyzing the number of tweets and the degree of public participation can effectively predict the actual dynamics of an epidemic. In addition, existing research discusses the role of social media use by public health authorities in advancing health reform and building an open government. In general, existing studies have explored the influencing factors, preferences, and significance of social media use, but they have paid less attention to how public health authorities use social media to communicate with the public. In particular, these studies have rarely investigated and evaluated the effects of communication.

Moreover, in the Western context, China is regarded as an authoritarian country whose system differs from Western-style democracies. Sudden public health incidents occur frequently in China and affect its economic development and social stability. Previous studies have attributed these effects to China's administrative system, noting that China's emergency decision making is often guided by a top-down command and control system and that information transmission follows a layer-by-layer linear model. As a result, public feedback channels are not smooth, and interaction between the government and the public is limited. For example, during the severe acute respiratory syndrome incident, delays in decision making caused by poor communication led to the spread of public panic and threatened public health and safety. This situation has also occurred in the fight against COVID-19. Therefore, identifying ways to improve communication between the government and the public and promoting timely and effective risk communication are key for China in dealing with sudden public health incidents.
However, few studies have empirically examined the current situation and interactivity of social media use by Chinese public health authorities. The purpose of this study is to examine how Chinese public health authorities use social media to improve communication between the government and the public. This study analyzes the government affairs Weibo accounts of the Center for Disease Control and Prevention (CDC) at the provincial and prefectural levels in mainland China; describes the current situation of the adoption, operation, and communication of government Weibo accounts; discusses whether the use of social media by Chinese public health authorities has improved health communication between the government and the public; and considers what factors help to promote communication between the government and the public. Specifically, this study aims to answer the following questions: (1) Is it common for the CDC at the provincial and prefectural levels in mainland China to use government Weibo accounts? (2) What is the current status of government Weibo use by the CDC at the provincial and prefectural levels in mainland China? (3) How does the CDC at the provincial and prefectural levels in mainland China communicate with the public on government Weibo accounts?
Study Sample

The CDC is a state-funded bureau under the leadership of the National Health Commission of China that specializes in disease control and prevention, and public health. According to its official website, its mission is to create a safe and healthy environment, maintain social stability, ensure national security, and promote people's health through the prevention and control of diseases, injuries, and disabilities. Corresponding institutions exist at every administrative level, from the central government down to local governments.

Sina Weibo is one of the most popular social media platforms in China and is similar to Twitter. As of December 2019, there were 139,000 government agencies with registered Weibo accounts on the platform, according to the China Internet Network Information Center 45th Statistical Report on the Development of the Internet in China. Because Weibo is highly influential, government agencies set up government Weibo accounts on the Sina Weibo platform.

A total of 134 sample accounts were collected. The sample collection process was as follows. First, through the "find-search-user" function on the Sina Weibo client, we conducted searches with "xx province + CDC" ("xx省+疾病预防控制中心"), "xx City + CDC" ("xx市+疾病预防控制中心"), "Centers for Disease Control and Prevention" ("疾控中心"), and "disease control and prevention" ("疾控") as the search keywords. For example, "Jiangsu Centers for Disease Control and Prevention" ("江苏省疾病预防控制中心"), "Nanjing Centers for Disease Control and Prevention" ("南京市疾病预防控制中心"), and "Chongqing Disease Control and Prevention" ("重庆疾控") are Weibo accounts that meet the requirements. Second, we visited each CDC's official website and included any Weibo account announced there in the sample. The samples were collected from March 21, 2019, to March 30, 2019. To understand the use of the CDC's government Weibo accounts during COVID-19, this study also observed the data for these 134 sample accounts from January 1 to June 30, 2020. To ensure that there were no omissions, the collection process was jointly undertaken by a teacher and two trained master's degree students, and the collection results of the three were completely consistent.

Data Collection

Data collection for this study was mainly done using the Sina Weibo webpage (the accompanying figure shows the Sina Weibo webpage of a CDC facility in China). We collected the latest 10 Weibo tweets posted by these 134 CDC government Weibo accounts before midnight on March 30, 2019. Considering that some accounts had posted fewer than 10 tweets, or none at all, since registration, we could not collect all 1340 tweets; this study therefore collected a total of 1215 Weibo tweets. The data collected mainly included topic type, content form, degree of originality, reply rate of comments, and reply time of comments. In addition, this study collected supplemental data on the overall performance of the CDC's government Weibo accounts during the pandemic, from January 1 to June 30, 2020. The adoption and operation of the CDC's government Weibo accounts were also included in the survey. For adoption, the account registration time was collected. For operation, the data collected included whether the account received official certification from Sina Weibo, the update time of the most recent Weibo post, and the number of followers.
Data Analysis

Because China's regional economy has developed unevenly across the eastern, central, and western regions, which may affect the development of the CDC's government Weibo accounts, this study follows the National Bureau of Statistics' division of mainland China into eastern, central, and western regions. CDC facilities were accordingly grouped into the "Eastern CDC," "Central CDC," and "Western CDC" for observation.

In terms of data coding, first, whether a Weibo account passed the official certification of Sina Weibo was coded as passed or failed. Second, the registration year of the CDC's government Weibo accounts was coded as 2009-2011, 2012-2014, 2015-2017, or 2018-2019. Third, the update time of the most recent Weibo post was coded as within 30 days, 31-90 days, 91-365 days, more than 365 days, or no content. In the analysis of the government Weibo accounts' interactions, this study used the content analysis method to code and analyze the Weibo material along the dimensions listed below. Before the formal coding, we analyzed the reliability of the three coders. The overall reliability was 0.97, and the lowest item reliability was 0.93, which met the reliability requirements of content analysis.

Dimensions and indicators of the 1215 CDC government Weibo tweets:
- Topic type: disease control information; emergency information; popularization of health knowledge; popularization of disease knowledge; radiation hygiene/school hygiene; government affairs trends; policy interpretation; Weibo help/citizen consultation; other
- Content form: only original text; posts with pictures/videos/hyperlinks
- Degree of originality: original posts; retweeted posts
- Reply rate of comments: reply only once; interactive reply
- Reply time of comments: between 0 and 1 hour; between 1 and 8 hours; between 8 and 12 hours; more than 12 hours

Availability of Data and Materials

All the data are publicly available on the internet via the search strategy indicated in the Study Sample section. The original data are in Chinese and can be provided upon request.

Ethics Approval and Consent to Participate

The study was reviewed and approved by the Academic Committee of the School of Journalism and Communication at Chongqing University, which acts as an ethics committee. According to the committee's review report, the sample of this study consists of nonparticipants. Therefore, this study does not violate research ethics.
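To make the intercoder reliability check concrete, the following is a minimal Python sketch assuming a simple pairwise percent-agreement measure; the paper does not name its reliability statistic, so this choice, along with the coder labels and toy data, is purely illustrative:

```python
from itertools import combinations

def percent_agreement(coder_a, coder_b):
    """Share of tweets to which two coders assigned the same category."""
    assert len(coder_a) == len(coder_b)
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def overall_reliability(codings):
    """Average pairwise percent agreement across all coders."""
    pairs = list(combinations(codings, 2))
    return sum(percent_agreement(a, b) for a, b in pairs) / len(pairs)

# Hypothetical topic-type labels assigned independently by three coders
coder1 = ["emergency", "health knowledge", "gov affairs", "health knowledge"]
coder2 = ["emergency", "health knowledge", "gov affairs", "disease knowledge"]
coder3 = ["emergency", "health knowledge", "gov affairs", "health knowledge"]

print(f"Overall reliability: {overall_reliability([coder1, coder2, coder3]):.2f}")
# With these toy data: pairwise agreements are 0.75, 1.00, and 0.75 -> 0.83
```

In the same spirit, per-item reliability (the 0.93 figure reported above) could be obtained by running the same agreement computation separately for each coded dimension, such as content form or reply time.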
Popularization of the CDC's Government Weibo Accounts

Geographical Distribution of the CDC's Government Weibo Adoption

There are a total of 450 provincial- and prefecture-level CDC facilities across the 31 provincial administrative regions of mainland China. All 31 provincial-level CDC facilities could register government Weibo accounts, but only 8 have done so, a registration rate of 25.8%. Similarly, of the 419 prefecture-level CDC facilities, only 126 have registered accounts, a registration rate of 30.1%. There are regional differences in the adoption of the CDC's government Weibo accounts: the registration rate declines from the eastern region to the central region to the western region.

Of the 134 accounts in total, the number of Weibo accounts in the eastern region (n=68, 50.7%) is higher than in the central region (n=30, 22.4%) and the western region (n=36, 26.9%). There are 158 CDC facilities in the eastern region, of which 68 have registered government Weibo accounts, a registration rate of 43.0%. The highest registration rate is in the capital, Beijing (17/17, 100.0%), and the lowest is in Hainan (0/5, 0%). There are 112 CDC facilities in the central region, of which 30 have registered accounts, a registration rate of 26.8%. Henan Province (10/18, 55.6%) has the highest rate, and Heilongjiang Province (0/14, 0%) the lowest. There are 180 CDC facilities in the western region, of which 36 have registered accounts, a registration rate of 20.0%. In the western region, Ningxia (3/6, 50.0%) has the highest registration rate, while Tibet (0/8, 0%) and Qinghai (0/9, 0%) have the lowest.

Time Distribution of the CDC's Government Weibo Account Adoption

Adoption of the CDC's government Weibo accounts has increased year by year, but some provinces have still not registered accounts. The first CDC facility to register a Weibo account in mainland China was the CDC in Lianyungang, Jiangsu Province, which registered on January 17, 2011. Since then, the registration rate of government Weibo accounts has increased year by year. In 2011, of the 450 CDC facilities in total, 22 facilities in 14 administrative regions registered government Weibo accounts, a registration rate of 4.9%. From 2012 to 2014, 80 additional CDC facilities registered accounts, raising the registration rate to 22.7% (102/450), with accounts distributed across 26 provincial administrative regions. From 2015 to 2017, 24 additional facilities registered accounts, for a registration rate of 28.0% (126/450), again across 26 provincial administrative regions. From 2018 to March 2019, 8 more facilities registered accounts, raising the registration rate to 29.8% (134/450), with accounts distributed across 29 provincial administrative regions. Currently, the 3 provincial administrative regions of Heilongjiang, Qinghai, and Hainan have not registered government Weibo accounts.
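As a small worked example of the registration-rate arithmetic above, the Python snippet below recomputes the regional rates from the counts reported in the text; the dictionary layout and variable names are illustrative only:

```python
# Counts of CDC facilities and registered Weibo accounts per region (from the text)
regions = {
    "eastern": {"facilities": 158, "registered": 68},
    "central": {"facilities": 112, "registered": 30},
    "western": {"facilities": 180, "registered": 36},
}

for name, counts in regions.items():
    rate = counts["registered"] / counts["facilities"] * 100
    print(f"{name}: {rate:.1f}%")  # eastern: 43.0%, central: 26.8%, western: 20.0%
```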
Hierarchical Distribution of the CDC's Government Weibo Adoption

The central-level CDC registered a government Weibo account later than the provincial-level CDCs, and the provincial-level CDCs registered later than the prefecture-level CDCs, suggesting a "bottom-up" process of policy learning. Among the 450 CDC facilities at the provincial and prefectural levels in China, the earliest to register a Weibo account was the prefecture-level CDC in Lianyungang, Jiangsu Province, on January 17, 2011. On March 31, 2011, the first provincial CDC (Hunan CDC) registered a government Weibo account, later than the first prefecture-level registration. The official Weibo account of the Chinese Centers for Disease Control and Prevention (the central institution), "Science Popularization of Disease Control and Prevention," was registered on August 23, 2019, more than 8 years after the first prefecture-level and provincial registrations. During this interval, more than 100 CDC facilities at the provincial and prefectural levels in mainland China registered government Weibo accounts.

Operation of the CDC's Government Weibo Accounts

Operating Certification of the CDC's Government Weibo Accounts

The Blue V certification is how Sina Weibo authenticates government, media, institutional, and other official accounts; the badge appears at the bottom right of the Weibo profile photo. This logo shows that Sina Weibo has verified an account as an organization's official account, making the account more authoritative and authentic and helping the public to accurately identify official accounts. Of the 134 CDC facilities that have registered Sina Weibo accounts, 88.1% (n=118) have been certified with Blue Vs, and the average number of followers for V-certified accounts is 18,753. The 16 uncertified CDC accounts attract little attention: with one exception (an account with 12,514 followers), each has fewer than 500 followers.

Dropout in the Use of the CDC's Government Weibo Accounts

The 134 CDC Weibo accounts show different degrees of use and dropout. A few are "zombie microblogs": 3.7% (n=5) of the accounts have not posted any content since registration. Others are inactive: only 37.3% (n=50) of the accounts had posted tweets in the past 30 days, 7.5% (n=10) in the past 31-90 days, and 16.4% (n=22) in the past 91-365 days, while 35.1% (n=47) had not posted in more than 1 year. Among the latter, most accounts are in the eastern region (n=21, 15.7%), more than in the western region (n=15, 11.2%) and the central region (n=11, 8.2%). Thus, although the registration rate in the eastern region is relatively high, its dropout rate of more than 1 year is also relatively high.

Followers of the CDC's Government Weibo Accounts Were Polarized

The total number of followers across the CDC's government Weibo accounts was 3,588,544; the average was 26,780 (SD 165,506), and the median was 496. The Changsha CDC had the largest number of followers, with a total of 1,357,440.
The fewest followers belonged to the Zhangye CDC, with a total of 2. Seven of the CDC's government Weibo accounts had fewer than 10 followers, 17 had more than 10,000 followers, and 3 had more than 100,000 followers. These 3 accounts were the Hebei CDC (n=151,289), the Hunan CDC (n=1,331,173), and the Changsha CDC (n=1,357,440).

Interaction of the CDC's Government Weibo Accounts

Reply Rate to Comments on the CDC's Government Weibo Accounts

Among the 1215 tweets selected for content analysis, only 12 had public comments that received replies from the government, accounting for less than 1.0%. Of these 12 replies, 50.0% (n=6) took the form of "reply only once" and the remaining 50.0% (n=6) took the form of "interactive reply." The response time was between 0 and 1 hour for 66.7% (n=8) of the replied-to comments, between 1 and 8 hours for 8.3% (n=1), and more than 12 hours for 25.0% (n=3).

Influence of the Topic Type on the Interaction

Among the 1215 tweets, the most common topic was popularizing health knowledge, with 606 (49.9%) tweets. Disease control information and popularization of disease knowledge each accounted for more than 10.0%. Policy interpretation, emergency information, and Weibo help/citizen consultation were the least common topics, accounting for 0.6% (n=7), 1.2% (n=15), and 1.2% (n=14) of posts, respectively. In terms of the number of Weibo retweets, comments, and likes, emergency information posts ranked first, with each post retweeted 4.1 times, commented on 2.9 times, and liked 4.0 times on average. Policy interpretation received the fewest comments: all of its posts had 0 comments. This shows that emergency information achieves a better communication effect than other topic types and that government Weibo has become a platform for government-public interaction in dealing with public health emergencies.

Influence of the Content Form on the Interactive Effect

Among the 1215 tweets, 222 (18.3%) posts contained only original text. The average numbers of retweets, comments, and likes for "only original text" posts were 0.3, 0.3, and 0.3, respectively. The other 993 (81.7%) posts contained pictures, videos, or hyperlinks; their average numbers of retweets, comments, and likes were 0.6, 0.6, and 0.5, respectively. Thus, posts with pictures, videos, or hyperlinks were retweeted, commented on, and liked more on average than text-only posts, showing a better interactive effect.

Influence of the Original Post on the Interactive Effect

Among the 1215 tweets, 703 (57.9%) were original posts, with average numbers of retweets, comments, and likes of 0.6, 0.6, and 0.7, respectively. The other 512 (42.1%) were retweeted posts, with average numbers of retweets, comments, and likes of 0.5, 0.4, and 0.2, respectively.
These averages show that original posts were retweeted, commented on, and liked more than retweeted posts, indicating that original content was more in line with the public's preference.

Performances of the CDC's Government Weibo Accounts During COVID-19

Activity Rate of the CDC's Government Weibo Accounts

Of the 134 Weibo accounts, 15.7% (n=21) were highly active, posting more than 5 tweets per day, and 11.2% (n=15) were moderately active, posting 1-2 tweets per day. In addition, 23.1% (n=31) posted 1-10 tweets per month. These three categories add up to exactly 50%. However, 46.3% (n=62) of the CDC's government Weibo accounts had not been updated for more than 1 year, an increase of 15 accounts (11.2 percentage points) over the number before the epidemic.

Main Content of the CDC's Government Weibo Accounts: COVID-19

Compared with the pre-epidemic statistics, the 134 CDC accounts gained approximately 60,000 new tweets and 1.4 million new followers. Of the tweets, 90% concerned the public health emergency of COVID-19. These tweets can be divided into four categories. The first was updating the daily epidemic situation, including new confirmed cases, new deaths, new suspected cases, new asymptomatic infections, cumulative cured and discharged cases, and the activity and medical tracking of confirmed cases. The second was announcing the CDC's work, such as the details of procurement announcements. The third was educating the public about the epidemic, including protection guidelines for specific places such as schools, companies, shopping malls, and subways; protection guidelines for specific groups such as pregnant women, couriers, taxi drivers, and sanitation workers; nutritional dietary guidelines during the epidemic; and the popularization of disinfectants and protective products. The fourth was publicizing the typical deeds and dedication of antiepidemic pioneers, especially those of the CDC.

Interaction of the CDC's Government Weibo Accounts: Decreased Month by Month

At the beginning of the pandemic, the numbers of retweets, comments, and likes were at their highest. For example, on January 24, 2020, the Beijing CDC released the tweet "A Letter from Beijing CDC to friends from all over the country who come (return) to Beijing," and its total number of retweets, comments, and likes exceeded 2000. As epidemic prevention and control became normalized, the amount of interaction decreased month by month. During the epidemic, public comments broadly fell into three categories: questions about epidemic prevention and control policies, requests for specific information about new cases, and praise for the CDC's efforts. However, the CDC's government Weibo accounts still rarely responded to public comments, and only a few provided an office phone number, which is similar to the results of previous studies.
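To illustrate how the per-category engagement averages reported above could be computed, here is a minimal Python sketch using pandas; the column names and toy rows are hypothetical stand-ins for the coded dataset, not the authors' actual pipeline:

```python
import pandas as pd

# Each row is one coded tweet: its topic type plus raw engagement counts
tweets = pd.DataFrame([
    {"topic": "emergency information", "retweets": 5, "comments": 3, "likes": 4},
    {"topic": "emergency information", "retweets": 3, "comments": 3, "likes": 4},
    {"topic": "health knowledge",      "retweets": 1, "comments": 0, "likes": 1},
    {"topic": "policy interpretation", "retweets": 0, "comments": 0, "likes": 0},
])

# Mean retweets, comments, and likes per topic type, rounded to one decimal
engagement = tweets.groupby("topic")[["retweets", "comments", "likes"]].mean()
print(engagement.round(1))
```

The same groupby pattern extends directly to the other coded dimensions (content form and degree of originality) by swapping the grouping column.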
Geographical Distribution of the CDC’s Government Weibo Adoption There are a total of 450 CDC facilities in 31 provincial- and prefecture-level administrative regions in mainland China, as shown in . In total, 31 provincial-level CDC government Weibo accounts should be registered, but only 8 have been registered, with a registration rate of 25.8%. Additionally, 419 prefectural-level CDC government Weibo accounts should be registered but only 126 have been registered, with a registration rate of 30.1%. There are regional differences in the adoption of the CDC’s government Weibo accounts, and the registration rate shows a decreasing trend in the eastern region, the central region, and the western region. provides the distribution of CDC facilities and the CDC’s government Weibo accounts in mainland China. Of the total 134 accounts, the number of Weibo accounts in the eastern region (n=68, 50.7%) is higher than those in the central region (n=30, 22.4%) and the western region (n=36, 26.9%). There are a total of 158 CDC facilities in the eastern region, and 68 of these have registered government Weibo accounts, with a registration rate of 43.0%. The highest registration rate is in the capital, Beijing (17/17, 100.0%), and the lowest is in Hainan (0/5, 0%). There are 112 CDC facilities in the central region, and 30 of them have registered Weibo accounts, with a registration rate of 26.8%. Henan Province (10/18, 55.6%) has the highest rate, and Heilongjiang Province (0/14, 0%) has the lowest rate. The number of CDC facilities in the western region is 180, of which 36 have registered Weibo accounts, with a registration rate of 20.0%. In the western region, Ningxia (3/6, 50.0%) has the highest registration rate, while Tibet (0/8, 0%) and Qinghai (0/9, 0%) have the lowest rate. Time Distribution of the CDC’s Government Weibo Account Adoption The CDC’s government Weibo account adoption has a trend of increasing year by year, but there are still some provinces that have not registered government Weibo accounts. The first CDC facility to register a Weibo account in mainland China was the CDC in Lianyungang, Jiangsu Province, which registered on January 17, 2011. Since then, the registration rate of government Weibo accounts has increased year by year. In 2011, of the total 450 CDC facilities, 22 CDC facilities in 14 administrative regions registered government Weibo accounts, with a registration rate of 4.9% . From 2012 to 2014, there were 80 additional CDC facilities with registered government Weibo accounts, increasing the registration rate to 22.7% (102/450), and overall, the accounts were distributed in 26 provincial administrative regions . From 2015 to 2017, there were 24 additional CDC facilities with registered government Weibo accounts, with a registration rate of 28.0% (126/450), and overall, the accounts were distributed in 26 provincial administrative regions . From 2018 to March 2019, there were 8 new CDC facilities with registered government Weibo accounts, increasing the registration rate to 29.8% (134/450), and overall, the accounts were distributed in 29 provincial administrative regions . Currently, the 3 provincial administrative regions of Heilongjiang, Qinghai, and Hainan have not registered government Weibo accounts. 
Hierarchical Distribution of the CDC’s Government Weibo Adoption The central-level CDC registered a government Weibo account later than the provincial-level CDC, and the provincial-level CDC registered later than the prefecture-level CDC, showing a “bottom-up” policy learning process. A total of 450 CDC facilities at the provincial and prefectural levels in China have registered Weibo accounts, of which the earliest one is the CDC in Lianyungang, Jiangsu Province, which is at the prefectural level. The registration time was January 17, 2011. On March 31, 2011, the first provincial CDC (Hunan CDC) registered a government Weibo account, which was later than the account registration time of the prefecture-level administrative district. The registration time of the official Weibo, “Science Popularization of Disease Control and Prevention,” of the Chinese Centers for Disease Control and Prevention (as a central institution) was August 23, 2019, which was more than 9 years later than the prefecture-level CDC and provincial administrative regions where Weibo accounts were first registered. In addition, during this period, more than 100 CDC facilities at the provincial and prefectural levels in mainland China registered government Weibo accounts.
There are a total of 450 CDC facilities in 31 provincial- and prefecture-level administrative regions in mainland China, as shown in . In total, 31 provincial-level CDC government Weibo accounts should be registered, but only 8 have been registered, with a registration rate of 25.8%. Additionally, 419 prefectural-level CDC government Weibo accounts should be registered but only 126 have been registered, with a registration rate of 30.1%. There are regional differences in the adoption of the CDC’s government Weibo accounts, and the registration rate shows a decreasing trend in the eastern region, the central region, and the western region. provides the distribution of CDC facilities and the CDC’s government Weibo accounts in mainland China. Of the total 134 accounts, the number of Weibo accounts in the eastern region (n=68, 50.7%) is higher than those in the central region (n=30, 22.4%) and the western region (n=36, 26.9%). There are a total of 158 CDC facilities in the eastern region, and 68 of these have registered government Weibo accounts, with a registration rate of 43.0%. The highest registration rate is in the capital, Beijing (17/17, 100.0%), and the lowest is in Hainan (0/5, 0%). There are 112 CDC facilities in the central region, and 30 of them have registered Weibo accounts, with a registration rate of 26.8%. Henan Province (10/18, 55.6%) has the highest rate, and Heilongjiang Province (0/14, 0%) has the lowest rate. The number of CDC facilities in the western region is 180, of which 36 have registered Weibo accounts, with a registration rate of 20.0%. In the western region, Ningxia (3/6, 50.0%) has the highest registration rate, while Tibet (0/8, 0%) and Qinghai (0/9, 0%) have the lowest rate.
The CDC’s government Weibo account adoption has a trend of increasing year by year, but there are still some provinces that have not registered government Weibo accounts. The first CDC facility to register a Weibo account in mainland China was the CDC in Lianyungang, Jiangsu Province, which registered on January 17, 2011. Since then, the registration rate of government Weibo accounts has increased year by year. In 2011, of the total 450 CDC facilities, 22 CDC facilities in 14 administrative regions registered government Weibo accounts, with a registration rate of 4.9% . From 2012 to 2014, there were 80 additional CDC facilities with registered government Weibo accounts, increasing the registration rate to 22.7% (102/450), and overall, the accounts were distributed in 26 provincial administrative regions . From 2015 to 2017, there were 24 additional CDC facilities with registered government Weibo accounts, with a registration rate of 28.0% (126/450), and overall, the accounts were distributed in 26 provincial administrative regions . From 2018 to March 2019, there were 8 new CDC facilities with registered government Weibo accounts, increasing the registration rate to 29.8% (134/450), and overall, the accounts were distributed in 29 provincial administrative regions . Currently, the 3 provincial administrative regions of Heilongjiang, Qinghai, and Hainan have not registered government Weibo accounts.
The central-level CDC registered a government Weibo account later than the provincial-level CDC, and the provincial-level CDC registered later than the prefecture-level CDC, showing a “bottom-up” policy learning process. A total of 450 CDC facilities at the provincial and prefectural levels in China have registered Weibo accounts, of which the earliest one is the CDC in Lianyungang, Jiangsu Province, which is at the prefectural level. The registration time was January 17, 2011. On March 31, 2011, the first provincial CDC (Hunan CDC) registered a government Weibo account, which was later than the account registration time of the prefecture-level administrative district. The registration time of the official Weibo, “Science Popularization of Disease Control and Prevention,” of the Chinese Centers for Disease Control and Prevention (as a central institution) was August 23, 2019, which was more than 9 years later than the prefecture-level CDC and provincial administrative regions where Weibo accounts were first registered. In addition, during this period, more than 100 CDC facilities at the provincial and prefectural levels in mainland China registered government Weibo accounts.
The CDC’s Government Weibo Accounts Operating Certification The Blue V certification is how Sina Weibo authenticates government, media, institutional, and other official accounts, as shown in the bottom right of the Weibo profile photo in . This logo shows that Sina Weibo has verified that an account is an organization’s official account, and the main body of the account is more authoritative and authentic, which can help the public to accurately identify official accounts. Of the 134 CDC facilities that have registered Sina Weibo accounts, 88.1% (n=118) of the accounts have been certified with Blue Vs, and the average number of Weibo followers with V-certified accounts is 18,753. There are 16 CDC accounts without certification, and followers do not pay attention to the non–V-certified accounts. One exception is an account that has 12,514 followers; the other Weibo accounts have less than 500 followers. Dropout in the Use of the CDC’s Government Weibo Accounts The 134 CDC Weibo accounts have different degrees of use and dropout . A few of them are “zombie microblogs,” that is, 3.7% (n=5) of the accounts have not posted any content since registering their Weibo account. Some of the accounts are inactive. Only 37.3% (n=50) of the accounts had posted tweets in the past 30 days, 7.5% (n=10) of the accounts had posted tweets in the last 31-90 days, 16.4% (n=22) of the accounts had posted tweets in the last 91-365 days, and 35.1% (n=47) of the accounts had not posted tweets in more than 1 year. Among the latter, most accounts are in the eastern region, accounting for 15.7% (n=21), which is higher than those from the western region at 11.2% (n=15) and those from the central region at 8.2% (n=11). It can be seen that, although the registration rate in the eastern region is relatively high, the dropout rate of more than 1 year is also relatively high. Followers of the CDC’s Government Weibo Accounts Were Polarized The total number of followers on the CDC’s government Weibo accounts was 3,588,544, the average was 26,780 (SD 165,506), and the median was 496. Among the accounts, the Changsha CDC had the largest number of followers, with a total of 1,357,440 followers. The one with the least number of followers was the Zhangye CDC, with a total of 2 followers. The total number of followers for 7 of the CDC’s government Weibo accounts was less than ten; 17 of the CDC’s government Weibo accounts had more than 10,000 followers; and 3 of the CDC’s government Weibo accounts had more than 100,000 followers. These 3 accounts were the Hebei CDC (n=151,289), the Hunan CDC (n=1,331,173), and the Changsha CDC (n=1,357,440).
Reply Rate to Comments on the CDC’s Government Weibo Accounts
Among the 1215 tweets selected for content analysis in this study, only 12 public comments received replies from the government, a reply rate of less than 1.0%. Analysis of the reply form and reply time of the 12 replies found that 50.0% (n=6) were in the form of “reply only once” and the remaining 50.0% (n=6) were “interactive replies.” The response time was between 0 and 1 hour for 66.7% (n=8) of the replied-to comments, between 1 and 8 hours for 8.3% (n=1), and more than 12 hours for 25.0% (n=3).

Influence of the Topic Type on the Interaction
Among the 1215 tweets across all topics, popularizing health knowledge had the most, reaching 606 (49.9%) tweets . Disease control information and popularization of disease knowledge each accounted for more than 10.0%. Tweets about policy interpretation, emergency response, and Weibo help and citizen consultation were the fewest, accounting for 0.6% (n=7), 1.2% (n=15), and 1.2% (n=14) of posts, respectively. In terms of Weibo retweets, comments, and likes, emergency information posts ranked first, with each post retweeted 4.1 times, commented on 2.9 times, and liked 4.0 times on average. Policy interpretation received the fewest comments, all of which were 0. This shows that the communication effect of emergency information is better than that of other topic types in dealing with public health emergencies, and that it has become the platform for interaction between the government and the public.

Influence of the Content Form on the Interactive Effect
Among the 1215 tweets, 222 (18.3%) posts contained “only original text”; their average numbers of retweets, comments, and likes were 0.3, 0.3, and 0.3, respectively. The other 993 (81.7%) posts contained “pictures/videos/hyperlinks”; their average numbers of retweets, comments, and likes were 0.6, 0.6, and 0.5, respectively. Microblogs with “pictures/videos/hyperlinks” therefore averaged more retweets, comments, and likes than text-only microblogs, and their interactive effect was better.

Influence of the Original Post on the Interactive Effect
Among the 1215 tweets, 703 (57.9%) were “original posts,” with average numbers of retweets, comments, and likes of 0.6, 0.6, and 0.7, respectively. The other 512 (42.1%) were “retweeted posts,” with averages of 0.5, 0.4, and 0.2, respectively. Original posts thus averaged more retweets, comments, and likes than retweeted posts, indicating that original content was more in line with the public’s preference.
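The per-category engagement averages above come from straightforward descriptive statistics over the coded tweets. A minimal sketch of that computation, assuming hypothetical field names and illustrative records rather than the authors’ actual coding sheet:

```python
# Illustrative only: average engagement per coding category for coded tweet records.
# Field names ("topic", "form", "retweets", ...) are assumptions, not the study's schema.
from collections import defaultdict

tweets = [
    {"topic": "emergency information", "form": "pictures/videos/hyperlinks",
     "original": True, "retweets": 4, "comments": 3, "likes": 4},
    {"topic": "health knowledge", "form": "only original text",
     "original": False, "retweets": 0, "comments": 0, "likes": 1},
    # ... one record per coded tweet (n=1215 in the study)
]

def mean_engagement(records, key):
    """Average retweets/comments/likes per value of one coding variable."""
    sums = defaultdict(lambda: [0, 0, 0])
    counts = defaultdict(int)
    for t in records:
        k = t[key]
        sums[k][0] += t["retweets"]
        sums[k][1] += t["comments"]
        sums[k][2] += t["likes"]
        counts[k] += 1
    return {k: [round(s / counts[k], 1) for s in v] for k, v in sums.items()}

print(mean_engagement(tweets, "topic"))  # e.g. {'emergency information': [4.0, 3.0, 4.0], ...}
print(mean_engagement(tweets, "form"))
```

Running the same aggregation over the "original" flag yields the original-versus-retweeted comparison reported above.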
Activity Rate of the CDC’s Government Weibo Accounts
Of the 134 Weibo accounts, 15.7% (n=21) had a high level of activity, posting more than 5 tweets per day, and 11.2% (n=15) were moderately active, with 1-2 tweets per day. In addition, 23.1% (n=31) posted 1-10 tweets per month. These three categories add up to exactly 50%. However, 46.3% (n=62) of the CDC’s government Weibo accounts had not been updated for more than 1 year, an increase of 15 accounts (11.2 percentage points) over the pre-epidemic figure.

Main Content of the CDC’s Government Weibo Accounts: COVID-19
Compared with the pre-epidemic statistics, the 134 CDC accounts gained approximately 60,000 new tweets and 1.4 million new followers. Of the tweets, 90% concerned public health emergencies related to COVID-19. These tweets fall into four categories. The first was updating the daily epidemic situation, including new confirmed cases, new deaths, new suspected cases, new asymptomatic infections, cumulative cured and discharged cases, and life and medical tracking of confirmed cases. The second was announcing the CDC’s work, such as the details of procurement announcements. The third was educating the public about the epidemic, including the issuance of protection guidelines for specific places such as schools, companies, shopping malls, and subways; protection guidelines for specific groups such as pregnant women, couriers, taxi drivers, and sanitation workers; nutritional dietary guidelines during the epidemic; and the popularization of disinfectants and protective products. The fourth was publicizing the typical deeds and dedication of the antiepidemic pioneers, especially the CDC.

Interaction of the CDC’s Government Weibo Accounts: Decreased Month by Month
At the beginning of the pandemic, the numbers of posts, comments, and likes were the highest. For example, on January 24, 2020, the Beijing CDC released the tweet “A Letter from Beijing CDC to friends from all over the country who come (return) to Beijing,” which accumulated more than 2000 posts, comments, and likes in total. With the normalization of epidemic prevention and control, the amount of interaction decreased. During the epidemic, public comments broadly fell into three categories: questions about epidemic prevention and control policies, requests for specific information about new cases, and praise for the CDC’s efforts. However, the CDC’s government Weibo accounts still rarely responded to public comments, and only a few provided an office phone number, which is similar to the results of previous studies .
Principal Findings
An increasing number of public health authorities in China have actively adopted new information platforms and tools for information disclosure and communication, which is the first step in improving communication between the government and the public. First, the registration rates of the Chinese CDC’s government Weibo accounts in the central and western regions are lower than that in the eastern region. This may be influenced by the government’s motivation and ability to adopt new technologies. According to the “motivation-capability” framework, whether a government adopts new technology depends mainly on its motivation and ability; only when motivation and innovation ability are strong will the new technology be used . Many studies have shown a positive correlation between the level of economic development and the level of government information development . Compared with the eastern region, the central and western regions of China are at a geographical disadvantage in terms of economic development, openness, and financial resources , so their capacity to use government Weibo accounts is also relatively low. Second, the diffusion of social media adoption among China’s public health authorities presents two characteristics: one is preferential diffusion among neighboring provinces, that is, horizontal learning and imitation, and the second is the vertical diffusion of local policies from the bottom to the top. Previous studies have shown that, owing to the “neighborhood effect,” it is easier for a government to follow the examples of neighboring leading regions’ governments and learn from their similar experiences, which helps improve the success rate of innovation and effectively avoid risks . Previous studies have also confirmed the positive role of the policy learning process . The central government also takes local policy innovation as a source of policy learning, and once a local policy succeeds, it revises the corresponding policy in time . Third, the social media operations of Chinese public health authorities are still in a passive state. Although nearly 90% of the accounts have official authentication, which can help the public quickly identify official accounts, and some accounts have a strong ability to reach followers, the overall activity of the accounts was low. Previous studies have shown that the more a government is involved in social media operations, the higher the public’s expectation of government interaction . A negative operational status is likely to dampen the public’s enthusiasm for online participation and may fail to live up to the public’s relationship expectations . Only by continuously and actively operating social media can public health authorities maintain a normal relationship with the public. Therefore, once a government registers a social media account, it must maintain its social media activity and update information frequently. Fourth, the use of social media by Chinese public health authorities is more inclined toward one-way information dissemination, such as popularizing health knowledge, while two-way communication with the public is still limited.
For China, where scientific literacy is generally low, the popularization of basic health knowledge is important, but more important is how the government mobilizes and communicates using social media to encourage the public to participate in dialogue and cooperation. Especially where traditional communication channels are limited, the role of social media is more prominent. Previous studies have indicated that the new dimension social media brings to public health is its capacity to change the nature and speed of the interaction between the public and public health authorities . Therefore, governments should use social media not only as a channel to release public health information and transmit health information to the public promptly but also to hold two-way dialogues with the public to increase public participation at all stages. This will allow social media to become the best practice for improving communication between the government and the public. Fifth, the use of social media by Chinese public health authorities provided an important channel of information disclosure and communication for the public during COVID-19, and it generally performed better than before the epidemic, although it still fell short of the Chinese government’s and the public’s requirements. The State Council of China requires that “public messages on the government Weibo should be carefully reviewed, released and processed” , but the CDC’s government Weibo accounts tend to be “one-way,” informing the public of the latest developments of the epidemic while failing to respond in a timely manner to public inquiries and to the large amount of misinformation circulating during the epidemic. In addition, this study shows that the social media interaction effect declined over the period from January to June 2020. This is consistent with previous research showing that government and public discussion trends on social media can predict the evolution of an epidemic’s dynamics . As the epidemic becomes normalized, public interest in the dynamics of the epidemic, control policies, and prevention and control guidelines wanes . Moreover, whether the function of social media for public health authorities can be maximized also depends on changes in the administrative system and political culture. In China’s centralized political system, the government is dominant in the government-public relationship, and China’s current top-down decision-making and execution mechanism has many bureaucratic levels, which are not conducive to the effective transmission of information. In addition, ordinary people usually hold the idea that it is “difficult to deal with the government” and are reluctant to communicate with it; therefore, there is a large psychological distance between the government and the public . Social media provides an opportunity to improve the interaction between the government and the public. Chinese public health authorities must break the thinking mode of the “official standard,” rethink the boundary between the government and the public, and promote the harmonious development of government-public relations.

Limitations
There are some limitations in this study. First, the survey samples did not include county-level administrative regions (county-level administrative regions are governed by prefecture-level administrative regions).
Future research can study samples from county-level administrative regions and expand on these results. Second, this study only evaluated government Weibo accounts; however, government WeChat accounts, as an emerging government social media platform, also merit research. Third, this study used only descriptive statistics and content analysis and did not investigate the psychology and behavior of the audience. Future research can use questionnaires, interviews, and other methods to further explore audiences’ use of government Weibo.

Conclusions
This study examines the current situation and interaction of social media use by public health authorities in China, a non-Western country. It analyzes the CDC’s government Weibo accounts for the provincial- and prefecture-level administrative regions in mainland China and explores how the public health authorities in China improve communication between the government and the public through social media. The results show that the adoption of government Weibo accounts has an uneven regional geographical distribution, steady year-by-year diffusion over time, and hierarchical bottom-up diffusion. Regarding operations, nearly 90.0% of government Weibo accounts have official certification, but there are dropouts in the operating process: one-third of the accounts have not provided updates for more than 1 year, and follower numbers are polarized, with a difference of more than 1 million between the largest and smallest. Regarding interaction, although government Weibo accounts have replaced the original layer-by-layer communication mode and made communication between the government and the public more convenient, the Chinese government is currently more inclined to release one-way information, and interaction with the public is limited; the response rate to comments was less than 1%. In terms of influencing factors, emergency information, multimedia content, and original content are more helpful in promoting communication between the government and the public. In a public health emergency such as COVID-19, these accounts can function by updating epidemic and protection information for the public, although a gap remains in two-way interaction. In general, government Weibo use is the first step in improving communication between the government and the public, but its effect is limited and needs improvement.
Diagnostic Accuracy of Immunohistochemistry for HER2-Positive Breast Cancer | 0089618f-7fef-4d3a-94c8-279170ff0163 | 10909106 | Anatomy[mh] | Breast cancer is the most frequent female malignancy in the world, and also in Thailand. This cancer is the leading cause of death and a significant economic and social concern. In Thailand in 2020, there were 22,158 new breast cancer diagnoses and 8,266 deaths (Arnold et al., 2022). Despite the great efficacy of screening and early detection methods such as mammograms and breast self-examination in Thailand, the prevalence of breast cancer has been gradually growing (Lakha et al., 2020). Furthermore, 10% to 30% of all breast cancer cases have HER2 protein overexpression or gene amplification (Iqbal and Iqbal, 2014). The human epidermal growth factor receptor 2 (HER2), also known as HER2/neu, is one of the epidermal growth factor receptor (ErbB) family of tyrosine kinase receptors (type I tyrosine kinase receptors). The gene is situated at 17q12 on chromosome 17 (Krishnamurti and Silverman, 2014). HER2 is an oncogene that plays a role in cell proliferation and differentiation (Iqbal and Iqbal, 2014) and is involved in the pathogenesis of breast cancer (Ishikawa et al., 2014). HER2 amplification and/or overexpression in breast cancer is related to aggressive disease behavior, including poor prognosis, a short disease-free period, and a short survival period (Burstein, 2005; Wang et al., 2015; Cong et al., 2020). In Thailand, the HER2 status of all breast cancer cases is evaluated before therapy. The evaluation of HER2/neu employs two distinct methods: immunohistochemistry (IHC) to detect protein expression, and fluorescence in situ hybridization (FISH) or dual in situ hybridization (DISH) to measure gene amplification (Gordian-Arroyo et al., 2019). IHC scores membrane HER2 expression as 0, 1+, 2+, or 3+, whereas ISH reports HER2 amplification as positive or negative. Both approaches follow the 2018 recommendations of the American Society of Clinical Oncology and College of American Pathologists (ASCO/CAP) (Gordian-Arroyo et al., 2019). In cases of positive HER2 amplification, the National Health Security Office of Thailand (NHSO) recommends a targeted therapy regimen including anti-HER2 medications such as pertuzumab and trastuzumab (Lewis Phillips et al., 2008; Gianni et al., 2011; Higgins and Baselga, 2011; Den Hollander et al., 2013; Doval et al., 2021). In practice, all patients are first screened by IHC; in cases of IHC 2+ or 3+, DISH is then performed, since IHC is cheap, quick, and simple, whereas ISH is roughly twenty times more costly as well as more complicated and time-consuming. In settings where ISH confirmation is lacking, as is often the case in developing countries, diagnostic workflows could be improved by relying on IHC alone, provided its accuracy is established. Consequently, this study aimed to determine the concordance rates between IHC scores 2+ and 3+ and HER2 gene amplification. The findings revealed that an IHC score of 3+ demonstrates results comparable to HER2 amplification, suggesting its potential utility alone, without an ISH result.
Sample Recruitment
The research utilized formalin-fixed paraffin-embedded (FFPE) tissue blocks obtained from breast cancer cases that had undergone both HER2 IHC and HER2 DISH procedures. These FFPE samples were derived from biopsy specimens taken during the preoperative treatment stage of patients diagnosed with breast cancer. The patients included in the study had primary tumors and had not undergone any previous radiation therapy or chemotherapy. Cases with low amounts of pathologic tissue or a lack of clinical data were excluded. The diagnosis of invasive ductal carcinoma, histological subtype, estrogen receptor (ER), and progesterone receptor (PR) status were confirmed by KS and SC. display examples of hematoxylin and eosin (H&E) staining. Clinical data were obtained from the patients’ clinical chart records, and all the relevant clinical and histological information is presented in . A total of 510 breast cancer cases were initially recruited from the Department of Pathology at Rajavithi Hospital in Bangkok, Thailand, between January 1st, 2022, and May 31st, 2023. After careful selection, 156 breast cancer tissue samples were included for analysis. This hospital-based study protocol was approved by the Institutional Review Board of Rajavithi Hospital in Bangkok, Thailand (IRB no. 009/2566), and written informed consent was obtained from all participating patients.

IHC
For IHC, the HER2/neu primary antibody (4B5) was used. The FFPE blocks were cut into sections with a thickness of 3 µm. The slides were then stained with the HER2/neu (4B5) primary monoclonal antibody (6 µg/100 µl, Ventana Medical Systems, catalog number 790-2991) using an automated slide stainer, the BenchMark Ultra (Ventana Medical Systems, Inc., Arizona, United States). The staining process was conducted at 37°C for 16 minutes. The HER2 protein was detected using the UltraView Universal DAB Detection Kit (Ventana-Roche Diagnostics, Meylan, France). Subsequently, the slides were counterstained with Hematoxylin II® (ab245880, Abcam, United Kingdom) for 8 minutes and Bluing Reagent® for 4 minutes (BR-OT, Biogenost, Croatia, EU). To ensure the accuracy and validity of the staining procedure, positive controls consisting of breast tissue samples known to be HER2-positive were included in each examination. The staining scores were determined by evaluating membrane staining in tumor cells. Based on the 2018 ASCO/CAP criteria, the IHC scores were classified as negative (score of 0 or 1+), equivocal (score of 2+), or positive (score of 3+) (Gordian-Arroyo et al., 2019). KS and SC conducted blind evaluations and provided scores. illustrates an example of an IHC score of 3+, while demonstrates an example of an IHC score of 2+.

DISH
The FFPE blocks were cut into sections with a thickness of 3 µm. HER2 gene amplification was determined using the INFORM HER2 DISH DNA probe cocktail assay (catalog number 800-6043) on the automated VENTANA BenchMark ULTRA platform (Ventana Medical Systems Inc., Tucson, AZ, USA). The procedure involved several steps, including deparaffinization, tissue adjustment, proteinase treatment, and DNA denaturation by heating at 80°C for 8 minutes. Subsequently, the slides were incubated with the VENTANA Silver ISH DNP Detection Kit for HER2 copies (black signal) for 48 minutes, followed by the VENTANA Red ISH DIG Detection Kit for chromosome 17 (red signal) for 56 minutes.
Finally, the slides were counterstained with Hematoxylin II® (ab245880, Abcam, United Kingdom) for 8 minutes and Bluing Reagent® (BR-OT, Biogenost, Croatia, EU) for 8 minutes to enhance visibility and provide contrast. The DISH analysis was conducted by ST and KS under a microscope. In (DISH positive) and 1F (DISH negative), the red signal represents the probe targeting the chromosome 17 centromere (CEP17), serving as an internal control, and the black signal corresponds to the HER2 probe on chromosome 17. The results were evaluated based on the ratio of HER2 signals to CEP17 signals and the average HER2 copy number in the cancer cells, following the criteria set by ASCO/CAP (Gordian-Arroyo et al., 2019). HER2 gene amplification was classified as “positive” if the HER2/CEP17 signal count ratio was 2.0 or greater, or if the ratio was less than 2.0 but the average number of HER2 signals per cell was 6.0 or higher; as “equivocal” if the ratio was less than 2.0 and the average number of HER2 signals per cell ranged from 4.0 to less than 6.0; and as “negative” if the ratio was less than 2.0 and the average number of HER2 signals per cell was less than 4.0 (Nishimura et al., 2016). KS and SC carried out blind evaluations and provided scores.

Statistical Analyses
The statistical analysis was conducted using version 22.0 of the SPSS software (IBM Corp., Armonk, NY, USA). The diagnostic evaluation of IHC positivity (score of 3+) was performed using HER2 amplification as the gold standard, following the 2018 ASCO/CAP guidelines. The following parameters were calculated with 95% confidence intervals: sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), positive likelihood ratio (LR+), and negative likelihood ratio (LR-) to measure the diagnostic test’s accuracy and reliability.
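The DISH scoring rules above form a small decision procedure; a compact restatement in code, for illustration only, with the thresholds taken directly from the text:

```python
# Illustrative restatement of the ASCO/CAP 2018 DISH reading rules described above.
def classify_dish(her2_cep17_ratio: float, avg_her2_per_cell: float) -> str:
    """Classify HER2 gene amplification from DISH signal counts."""
    if her2_cep17_ratio >= 2.0:
        return "positive"
    # Ratio < 2.0: the decision falls to the average HER2 copy number per cell.
    if avg_her2_per_cell >= 6.0:
        return "positive"
    if avg_her2_per_cell >= 4.0:
        return "equivocal"
    return "negative"

assert classify_dish(2.3, 3.5) == "positive"   # the ratio alone is decisive
assert classify_dish(1.6, 4.8) == "equivocal"  # low ratio, intermediate copy number
assert classify_dish(1.2, 2.1) == "negative"
```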
Clinical Characteristics
A total of 510 breast cancer carcinoma tissue samples obtained from Rajavithi Hospital in Bangkok, Thailand, underwent HER2 IHC testing. Among these samples, only those with HER2 IHC scores of 2+ and 3+ were selected for further investigation using DISH. Ultimately, 156 cases met the criteria and were included for analysis. Of the 156 cases, 58 samples had an equivocal IHC score of 2+ (equivocal HER2 protein expression), while 98 samples showed a positive IHC score of 3+ (strong HER2 protein expression), reflecting a focus on cases with significant or uncertain levels of HER2 protein expression for subsequent DISH analysis. presents a summary of the clinical and pathological findings from the patient cohort. A total of 156 Thai patients diagnosed with breast cancer participated in this study. The patients’ median age was 54 years, with an interquartile range (IQR) of 45 to 63 years. Lesions were located on the right side in 45.5% of cases and on the left side in 54.4% of cases. The median tumor size was 3 cm, with an IQR of 1.8 to 4.5 cm. Regarding histological subtypes, the majority (91%) of cases were classified as invasive ductal carcinoma. Histological grading revealed that grade 3 accounted for the largest proportion (39.7%), followed closely by grade 2 (37.8%) and then grade 1 (4.49%). In terms of receptor status, the distribution of ER and PR was as follows: ER+PR+ (44.2%), ER-PR- (34%), ER+PR- (16%), and ER-PR+ (5.8%).

HER2 IHC and DISH Results
A total of 510 cases of invasive ductal carcinoma were examined by IHC. Among them, 58 cases were classified as HER2 IHC 2+, and 98 cases were categorized as HER2 IHC 3+. This distribution is illustrated in , representing HER2 IHC equivocal (score 2+) and HER2 IHC positive (score 3+) cases, respectively. Subsequently, the 156 cases underwent DISH analysis, with the average HER2 copy number assessed according to the ASCO/CAP 2018 criteria. Following this analysis, 108 cases were categorized as having positive HER2 amplification, while 48 cases showed negative HER2 amplification. These categories are visualized in , representing positive and negative HER2 amplification, respectively. To provide a contextual visualization, present the associated H&E stain images of the cases being discussed.

Diagnostic Value of HER2 IHC
We used HER2 amplification per the ASCO/CAP 2018 guidelines as the gold standard. There are 156 cases in total, of which 95 are true positives (HER2 IHC 3+/positive HER2 amplification), 45 are true negatives (HER2 IHC 2+/negative HER2 amplification), 3 are false positives (HER2 IHC 3+/negative HER2 amplification), and 13 are false negatives (HER2 IHC 2+/positive HER2 amplification). presents all diagnostic values calculated for the HER2 IHC method. The sensitivity, specificity, positive predictive value, negative predictive value, and positive likelihood ratio were very high (87.96%, 93.75%, 96.94%, 77.59%, and 14.07, respectively). In contrast, the negative likelihood ratio was very low (0.13), indicating a strong capability to use HER2 IHC as a screening and diagnostic test. Overall, the accuracy of HER2 IHC in diagnosing HER2 amplification was also high (89.74%), suggesting that the IHC technique can serve as a comparable alternative to DISH in diagnosing HER2 amplification.
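These indices follow directly from the stated 2×2 counts; a quick arithmetic check (a sketch, not the authors’ SPSS workflow) reproduces them:

```python
# Recompute the diagnostic indices from the 2x2 table stated above
# (TP=95, FP=3, FN=13, TN=45), with DISH amplification as the gold standard.
tp, fp, fn, tn = 95, 3, 13, 45

sensitivity = tp / (tp + fn)                 # 95/108 = 0.8796
specificity = tn / (tn + fp)                 # 45/48  = 0.9375
ppv = tp / (tp + fp)                         # 95/98  = 0.9694
npv = tn / (tn + fn)                         # 45/58  = 0.7759
lr_pos = sensitivity / (1 - specificity)     # 14.07
lr_neg = (1 - sensitivity) / specificity     # 0.13
accuracy = (tp + tn) / (tp + fp + fn + tn)   # 140/156 = 0.8974

print(f"Sens {sensitivity:.2%}, Spec {specificity:.2%}, "
      f"PPV {ppv:.2%}, NPV {npv:.2%}, "
      f"LR+ {lr_pos:.2f}, LR- {lr_neg:.2f}, Acc {accuracy:.2%}")
```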
When examining subgroups by age, tumor laterality, tumor size, histological subtype, histological grade, and ER and PR status, the HER2 IHC technique demonstrates remarkable effectiveness, with sensitivity and specificity ranging from 75% to 100% across all subgroup analyses . Moreover, the HER2 IHC approach is highly efficient in cases of metastatic breast cancer in particular, with 96% sensitivity and 95% specificity. Using the likelihood ratios, the HER2 IHC screening result was employed to estimate the probability that an individual case harbors HER2 amplification. For an HER2 IHC score of 3+, the posterior probability of DISH positivity is 97% (95% confidence interval 91% to 99%); for an HER2 IHC score of 2+, the posterior probability of DISH positivity is 23% (95% confidence interval 15% to 32%), as illustrated in .
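The quoted posterior probabilities can be reconstructed with Bayes’ theorem in odds form, using the cohort’s DISH-positive fraction (108/156) as the pre-test probability; this is a reconstruction under the 2×2 counts above, not a calculation quoted verbatim from the paper:

```latex
% Reconstruction of the quoted post-test probabilities (odds form of Bayes' theorem),
% assuming the cohort prevalence of DISH positivity (108/156) as the pre-test probability.
\[
\text{post-test odds} = \text{pre-test odds} \times LR, \qquad
\text{pre-test odds} = \frac{108}{48} = 2.25
\]
\[
\text{IHC } 3{+}: \quad 2.25 \times 14.07 \approx 31.7
\;\Rightarrow\; P = \frac{31.7}{1 + 31.7} \approx 97\%
\]
\[
\text{IHC } 2{+}: \quad 2.25 \times 0.13 \approx 0.29
\;\Rightarrow\; P = \frac{0.29}{1 + 0.29} \approx 23\%
\]
```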
IHC assesses the presence of the HER2 protein on tumor cell surfaces. This cost-effective method is readily available in local pathology labs. However, it has certain limitations, including factors influencing results, subjective interpretation, and a notable false positive rate (Pauletti et al., 2000; Tubbs et al., 2001; De Matos et al., 2010). Conversely, FISH or DISH quantifies HER2 copy number by employing fluorescent-labeled oligonucleotide probes that adhere to precise DNA sections. This genetic methodology yields dependable outcomes with reduced susceptibility to discrepancies among observers. Nonetheless, it is a fee-based analysis that currently entails notable expenses, and its turnaround time is longer than that of IHC. Currently, in Thailand, targeted therapies like trastuzumab are employed for treating breast cancer patients. To be eligible for such therapies, patients must have an equivocal (score 2+) or positive (score 3+) HER2 result by IHC, confirmed by a positive ISH test such as FISH or DISH. One limitation of this study is that IHC scores for HER2 can exhibit variability across laboratories. The technique can vary with factors such as the commercial primary antibody used, the tissue fixation time, and the level of expertise in interpreting HER2 immunostaining (Magaki et al., 2019). For our study, we employed the pathology laboratory at Rajavithi Hospital, where the inter-laboratory and intra-laboratory control of the machines and techniques is regularly supervised by the Royal College of Pathologists of Thailand. Using IHC to detect HER2 yielded a positive result (score 3+) with a specificity of 93.75%, which is deemed satisfactory; it also showed high specificity in several clinicopathological subgroups. Previous studies (Gown et al., 2008; Nitta et al., 2008) also demonstrated strong agreement between IHC and ISH, with an average agreement rate of over 90%. The high specificity of the IHC technique may be due to the revised ASCO/CAP criteria for reporting and interpreting HER2 IHC issued in 2018 (Pasricha et al., 2020). Before 2013, the equivocal HER2 IHC category (score 2+) was poorly defined: tumor cells showed weak to moderate staining, but membrane staining was incomplete. In 2013, the criteria were revised to include cancer cells with weak to moderate staining around the complete cell membrane. In 2018, the equivocal classification was removed from in situ hybridization reporting, leaving only positive and negative classifications. These changes reduced ambiguous reporting and ultimately improved accuracy (Gordian-Arroyo et al., 2019; Pasricha et al., 2020). Anti-HER2 therapies are extensively used and are especially advantageous in breast cancers with negative ER and PR. HER2 status is of utmost clinical significance: it serves as a crucial marker for determining whether breast cancer patients should receive trastuzumab, a targeted therapy. False-negative results in truly HER2-positive patients may lead to the omission of targeted therapy, depriving these patients of potentially beneficial treatment. On the other hand, false-positive results also pose challenges, as treating HER2-negative patients with trastuzumab can result in significant side effects and unnecessary resource waste.
Furthermore, anti-HER2 medication is a targeted therapy that is effective in treating HER2-positive metastatic breast cancer. Our findings indicate that the HER2 IHC approach is highly effective in detecting HER2-positive metastatic cases, underscoring the benefits of this technique. This study demonstrated the agreement between the IHC and DISH techniques. The utilization of IHC alone could assist healthcare professionals in the timely and appropriate administration of trastuzumab. This knowledge holds relevance, especially in resource-constrained developing nations. Furthermore, HER2 amplification and/or overexpression has been noted in other malignancies (Menard et al., 2001), implying potential applicability across diverse tumors. We also found a few instances of false positives and false negatives with the IHC technique. These can arise at any stage of IHC or from tissue fixation, processing, or artifact formation. To combat this, we are attempting to implement more quality controls covering the protocol, antibodies, and laboratory setting. Furthermore, it is crucial to strengthen these findings through larger sample sizes and more diverse cohorts. Expanding our understanding of breast cancer pathogenesis should also involve incorporating additional biomarkers such as PIK3CA and P53 mutations, as highlighted in previous studies (Ogeni et al., 2021; Ali et al., 2022). This approach will provide deeper insights into the administration of targeted therapies, leading to improved patient outcomes and a more effective allocation of healthcare resources.
ST, KS, and NK designed the study and analyzed and interpreted the data. ST, SC, and KS performed the experiments. ST drafted the manuscript. NK reviewed and edited the manuscript. All authors read and approved the final manuscript.
Biopreservative and Anti-Mycotoxigenic Potentials of Lb. paracasei MG847589 | 3ba869b8-ac9e-45ed-948b-068c8cada064 | 10891891 | Microbiology[mh] | Food consumption is intended to deliver required nutrients, while functional foods provide additional properties that contribute positively to health, especially in preventing various diseases and disorders . Increasing demand for natural and chemical-free products has led food research to search for alternative techniques and novel strategies for food biopreservation , and extending shelf life remains challenging . The genus Lactobacillus is essential to modern food technologies for its potential to replace antibiotic growth promoters . Various applications have recently been used to produce dairy products that resist mycotoxicological contamination and can reduce dairy product contamination [ , , ]. The antibacterial efficacy of Lactobacillus and its bacteriocins (ribosomally synthesized peptides or proteins) makes them a promising natural-preservative alternative that prevents or reduces the growth of the foodborne pathogen S. aureus [ , , ]. Furthermore, Lactobacillus bacteria suppressed the conidial germination and mycelial growth of Aspergillus parasiticus and Penicillium chrysogenum . There are opportunities for future research on preventing fungal growth and eliminating mycotoxins from food, or transforming them into less dangerous compounds, using strains of lactic acid bacteria . Natural contaminants such as mycotoxins are a significant food safety concern and are considered a major hazard in food products, particularly the aflatoxins AFB1 and AFM1, classified in Group 1 (human carcinogen) by the International Agency for Research on Cancer . Several applications have been recorded in which aflatoxin contamination was efficiently reduced through microbial antagonism . In addition, the application of natural extracts rich in bioactive molecules can reduce these types of hazards . Beyond their antifungal potential, Lactobacillus strains have shown many anti-mycotoxigenic capabilities and can be widely used in food and feed commodities either to inhibit mycotoxin production or to reduce the quantity of already produced mycotoxins through physical and chemical binding, involving acidification and absorbents with a multi-mycotoxin binding capacity . White cheese is the dominant category and a popular choice, accounting for approximately 32% of the cheese market in Egypt ; therefore, it can be considered the ideal product for producing probiotic cheese as a delivery system for viable probiotic microorganisms. Additionally, the consumption of probiotic cheese has been found to attenuate exercise-induced immune suppression, improve symptoms of constipation, and improve body mass index and blood pressure indices . The shelf life of white cheese is reportedly between 14 and 28 days, as white cheese generally ages slowly, while microbiota agents can potentially prolong cheese shelf life . However, some investigations have focused on the metabolomic benefits of other milk sources . Nevertheless, cheese manufacturing involves several steps, including ripening, storage, and handling, during which issues such as microbial contamination can occur. A novel strain of Lacticaseibacillus MG847589 ( Lb. paracasei MG847589), isolated in previous work from local dairy products, produces a bioactive metabolite (bacteriocin) with potential application in cheese production. This study aimed to produce soft white cheese fortified with this strain
( Lb. paracasei MG847589), its bacteriocin, and their combination, and to evaluate their biopreservative and anti-mycotoxigenic potentials for prolonged shelf life and safe food applications. The study also aimed to evaluate this strain’s functionality in improving the safety and preservation qualities of cheese products, such as reducing contamination levels with mycotoxin-producing fungi. The effects of these fortifications on physicochemical, microbial, texture, microstructure, and sensory properties were studied.
2.1. Physicochemical Characteristics of Functional White Cheese
Changes in the mean values of moisture, protein, fat, and fiber in dry matter (DM) are presented in ( ). All parameters were within the ranges usually observed in soft white cheeses . None of the cheese treatments affected the moisture, total protein, fiber, or fat content. These results agree with previous studies in which various adjunct cultures were used in white cheeses . The pH and lactic acid were also at levels usually observed in soft white cheeses . In general, soft white cheese production targets high acidification rates using starter cultures, which can differ among producers or areas of milk origin . Lb. paracasei and bacteriocin did not significantly affect the chemical composition of the cheese studied, except for the acidity values, which were significantly higher in the treatments containing the probiotic Lb. paracasei MG847589: CP and CPB. A similar observation was reported by Allam et al. . The sensory assessment of the soft white cheese products is shown in . All sensory evaluation parameters were affected by the fortifications and reflected the panelists’ preference for CPB, followed by CP and CB. These results correlate with the texture analyses and indicate that the increased hardness of the products fortified with probiotics or bacteriocin positively affected their sensory properties. The enhanced microstructure of CPB, pronounced in ( ), was reflected in the texture scores. Sensory perception of innovative products is crucial, as it is one of the keys to the widespread flavorful and wholesome image that dairy foods continue to enjoy with the consumer. Consequently, sensory measurement is often the final step in many experiments or applications for quality or consistency evaluation . Color analyses indicated that, compared with the control cheese, the cheeses with probiotics (CP), bacteriocin (CB), and probiotics plus bacteriocin (CPB) did not differ significantly in lightness (L), yellowness (b), or redness (a). However, CP tended to be slightly yellowish, as shown in ( ), while still exhibiting the typical appearance of soft white cheese. The sensory properties illustrated in showed that the color of CPB was preferable. Similar observations were recorded for probiotic cheese made with two lactobacilli strains .

2.2. Microbiological Analysis of Cheese during Maturation and Storage
Microbiological analyses of the cheese samples were carried out for different microbial groups during cold storage, when fresh (day 1) and after 15, 30, and 45 days ( ). Fortification with the probiotic strain, bacteriocin, or their mixture significantly ( p < 0.05) affected the lactobacilli counts compared to the control samples. In all cheese samples, coliforms, yeasts, and molds were not detected during storage, except on the 30th and 45th day of storage for the control and on the 45th day of storage for the probiotic treatment. Adjunct probiotic cultures have been reported to reduce coliforms during cheese maturation faster than in cheeses produced with a single starter culture [ , , ]. In , the counts of cocci did not differ significantly among samples during cheese storage. On the other hand, the addition of probiotics significantly increased the lactobacilli population ( p < 0.05) while conferring a healthy character on the cheese samples, since the lactobacilli population was maintained at high levels (>10^6 CFU/g) during 45 days of storage.
Lactobacilli counts in the cheese with probiotics and bacteriocin (CPB) changed significantly (8.42 to 7.46 log10 CFU/g) compared with the cheese with probiotics alone (CP) (8.17 to 7.60 log10 CFU/g). The Lactobacilli counts most likely originated from the starter and probiotic cultures, but also from milk non-starter cultures that survived pasteurization . The decrease in lactobacilli during ripening and storage may be due to low pH, high salt content, a lack of fermentable sugars, or possible bacteriocin production.

2.3. Texture Profile Analyses (TPA)

Texture profile analyses of the functional soft white cheese are illustrated in . Comparing the three treatments with the control (CS), the highest hardness values in cycle 1 were observed with CPB, followed by CP, CS, and then CB (3988.03, 3357.73, 2648.73, and 2525.73 g, respectively). The CP treatment showed higher adhesive force, adhesiveness, and springiness (378.17 g, 378.17 mJ, and 6.71 mm, respectively). Applying bacteriocin in CB significantly decreased the hardness in cycles 1 and 2 (2525.73 g and 2016.03 g, respectively). The reduction in hardness in soft cheese with bacteriocin may be related to the moisture content (64.87%), which acts as a plasticizer in the protein matrix. A similar observation was reported by Zaky and Mahmoud .

2.4. Microstructure of Cheese Samples

Scanning electron micrographs of cross-sections of the soft white cheese products are presented in . Compared to the control soft white cheese ( A), cheese with Lb. paracasei (CP) ( B) showed a porous structure, which may be reflected in the texture analyses showing the highest adhesiveness ( ). Fewer pores were observed in CB ( C), and its smooth structure reflected lower hardness ( ). Cheese with probiotics and bacteriocin (CPB) ( C) showed an intact structure; its lower moisture and higher acidity might cause the highest hardness and adhesive force ( ). The microstructural differences were significantly reflected in the panelists' preference for the hard texture of CPB ( ). These observations were also apparent in the appearance of the soft white cheese products ( ). Application of probiotics, bacteriocin, or their mixture to soft cheese is recommended for the maintenance of sensory properties in addition to microbiological safety .

2.5. Inhibitory Effects of Lb. paracasei MG847589 against Pathogenic Microorganisms

The inhibition effects of Lb. paracasei MG847589 against S. aureus are shown in ( ). Cheese fortified with Lb. paracasei MG847589 (CPS) showed an inhibition effect against S. aureus, decreasing its colonies from 6.54 to 3.32 log10 CFU/g after 28 days of storage ( p > 0.05); likewise, cheese fortified with Lb. paracasei MG847589 and bacteriocin (CPBS) showed an inhibition effect against S. aureus, from 6.52 to 2.10 log10 CFU/g after 28 days of storage ( p > 0.05). L. casei subsp. paracasei was reported to exhibit inhibition effects, at rates of 7.87% and 23.63%, against S. aureus on the 14th and 21st days of storage, respectively .

2.6. Inhibitory Effect of Lb. paracasei MG847589 against Pathogenic Fungi
The presence of Lb. paracasei MG847589 in the CPA and CPP treatments decreased the A. parasiticus and P. chrysogenum counts from 5.18 to 3.33 and from 5.20 to 3.55 log10 CFU/g, respectively, after 45 days of storage ( p > 0.05), indicating that the probiotic culture had an inhibitory effect against these fungal pathogens ( ). After 45 days of storage, A. parasiticus and P. chrysogenum counts decreased from 5.06 to 3.03 and from 5.11 to 2.86 log10 CFU/g in the CPBA and CPBP treatments ( Lb. paracasei MG847589 + bacteriocin), respectively ( ). The ability of Lb. paracasei to inhibit A. parasiticus ITEM11 was reported by Shehata et al. . The observed reduction in food pathogens in formulations fortified with Lb. paracasei MG847589 or its bacteriocin, compared to the negative control after 45 days of storage, can be attributed to the production of a series of antimicrobial compounds, such as lactic acid, organic acids, hydrogen peroxide, ethanol, and diacetyl, which can inhibit pathogenic bacteria and fungi. Furthermore, this strain produces a bacteriocin with a molecular weight of 2611 Da and peptides with bactericidal activity against both Gram-positive and Gram-negative bacteria . Consequently, probiotic strains that exhibit antimicrobial activity against spoilage or pathogenic bacteria within the matrix in which they are incorporated are of interest for industrial application because, in addition to exerting their probiotic effects, they contribute to extending product shelf life .

2.7. Antimycotoxigenic Effect of L. paracasei MG847589

The applied treatments were also evaluated for their detoxification effects in the manufactured cheese, both when AFM1 contaminated the raw materials and when the cheese samples were cross-contaminated with AFB1, as shown in and . The results showed that longer incubation of the spiked toxin in cheese treated with the probiotic, its metabolite bacteriocin, or their mixture increased the detoxification potency ( ). Degradation was more efficient in the AFM1-contaminated samples than in the AFB1-spiked samples. After 48 h of incubation with the probiotic, bacteriocin, or their mixture, the detoxification ratio ranged from 63% to 69% for AFB1 and from 64% to 71% for AFM1 in the spiked cheese samples. Previous studies have reported the favorable impact of bacteriocin, as a probiotic metabolite, on aflatoxin detoxification [ , , ]. Moreover, it has been reported that several probiotics can reduce aflatoxin contamination through various mechanisms . The results reflect the distinctive detoxification potency of the applied strain, with the bacterial cells and their metabolite bacteriocin showing closely similar efficiencies. These results indicate the possibility of utilizing L. paracasei as a common starter for raw materials with predicted contamination, which may be used for fresh or semi-fresh products; this would add a safety characteristic to the final dairy product. Bacterial metabolites, particularly those generated by probiotic bacteria, can potentially contribute to the decontamination of aflatoxins via numerous approaches.
The results show variation between applying whole bacterial cells and applying their metabolites in the targeted products . Introducing bacterial cells into food items is crucial for influencing the development of mycotoxigenic fungi and inhibiting mycotoxin formation. Certain beneficial bacteria can outcompete aflatoxin-producing fungi for nutrients and physical space. By colonizing similar ecological niches, these bacteria can restrict the development and propagation of toxin-producing fungi, reducing aflatoxin contamination ; this phenomenon is often referred to as competitive exclusion. The second mechanism could be linked to antagonism: certain bacterial species synthesize compounds with antifungal properties that impede the proliferation of aflatoxin-producing fungi . The potential impact of these metabolites includes the disruption of fungal cell membranes, interference with their metabolic activities, and the production of enzymes that break down aflatoxins . Several bacterial species have been shown to possess enzymes that degrade aflatoxins into less toxic or non-toxic molecules . This enzymatic activity can mitigate the toxicity of contaminated food and feed items. It is plausible that beneficial bacteria have enzyme pathways capable of converting aflatoxins into less harmful variants or eliminating their toxicity . These routes could be used to improve the safety of food and feed products. Specific bacterial metabolites can potentially adsorb aflatoxins, forming a binding interaction that hinders their absorption in vitro or in vivo in the gastrointestinal tracts of animals or humans . Studies consistently identify certain bacterial strains and their metabolites that can successfully decrease aflatoxin exposure. Nevertheless, it is crucial to acknowledge that the effectiveness of using bacterial metabolites for aflatoxin decontamination may differ depending on several aspects, including the particular bacterial strains used, environmental circumstances, and the extent of aflatoxin contamination.
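The effect sizes reported in Sections 2.5-2.7 reduce to two simple calculations: a percent detoxification ratio from toxin concentrations and a log10 reduction from viable counts. The sketch below illustrates both in Python; the 400 ng/mL spiking level comes from the methods, but the residual concentration is a hypothetical input chosen only to land inside the reported 63-69% range.

```python
# Minimal sketch of the two effect sizes used above: the detoxification ratio
# from HPLC-measured aflatoxin concentrations (Section 2.7) and the log10 CFU/g
# reduction from plate counts (Sections 2.5-2.6). Input values are illustrative.

def detoxification_ratio(initial_ng_ml: float, residual_ng_ml: float) -> float:
    """Percent of toxin removed relative to the initial (spiked) concentration."""
    return 100.0 * (initial_ng_ml - residual_ng_ml) / initial_ng_ml

def log_reduction(initial_log_cfu: float, final_log_cfu: float) -> float:
    """Reduction in viable counts, already expressed on the log10 scale."""
    return initial_log_cfu - final_log_cfu

# AFB1 spiked at 400 ng/mL; a hypothetical residual of ~136 ng/mL corresponds to
# ~66% detoxification, within the 63-69% range reported after 48 h.
print(f"detoxification ratio: {detoxification_ratio(400.0, 136.0):.0f}%")

# S. aureus in CPBS fell from 6.52 to 2.10 log10 CFU/g, i.e. a 4.42-log
# reduction (>99.99% of viable cells).
print(f"log10 reduction: {log_reduction(6.52, 2.10):.2f}")
```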
Fortification with Lb. paracasei MG847589 increased acidity and microbial counts, which may account for the porous microstructure, whereas bacteriocin promoted an intact microstructure. CPB showed a hard texture, while CB tended to be softer. Consequently, the sensory assessment reflected the panelists' preference for CPB, which gained higher scores than the control (CS). Fortification with Lb. paracasei MG847589 and bacteriocin (CPB) showed inhibitory effects against S. aureus, A. parasiticus, and P. chrysogenum, as reflected by their reduced counts, indicating preservative potential. Additionally, CPB showed significant anti-mycotoxigenic effects against aflatoxins B1 and M1. These potentials can extend shelf life and improve food safety, supporting the recommendation of fortification with both Lb. paracasei MG847589 and its bacteriocin as biopreservatives in many food applications.
4.1. Materials and Microorganisms

Lactobacillus paracasei MG847589 [GenBank accession No. MG847589] was isolated from traditional Egyptian Karish cheese . The strain is currently preserved at −80 °C in 20% glycerol. Before inoculation, the strain was activated in de Man, Rogosa and Sharpe (MRS) broth (37 °C/24 h). The commercial rennet enzyme and the commercial starter culture Yo-Mix 495 were gifted by the Dairy Pilot Plant, Alexandria University, Egypt. The milk protein (MPC), milk powder (RCM), and butter were purchased from the local market. The bacteriocin of this strain was extracted and purified as previously described .

4.2. White Cheese Preparation

White cheese was manufactured using the technique suggested by Tamime et al. , albeit with some modifications ( ). The standardized reconstituted milk was targeted at 38% total solids, 29% protein, and 7% fat. A laboratory homogenizer was used to blend the MPC and RCM in water (20,965× g /6 min). The resulting mixture was left to age overnight (4 °C) to ensure that the powders were evenly dispersed before pasteurization. The mixture was divided into four portions, each used to produce a different type of cheese: a control cheese with commercial starter (CS, 1.81 × 10⁹ CFU/mL); a probiotic cheese (CP, 1.34 × 10⁹ CFU/mL of L. paracasei MG847589); a bacteriocin-supplemented cheese (CB, at 500 AU/mL); and a combination of probiotics and bacteriocin (CPB). The commercial starter (Yo-Mix 495), containing S. thermophilus and L. delbrueckii, was re-activated in milk before being added to the mixture. The cheeses were then mixed and left undisturbed for two hours. The ingredients for producing 1 kg of white cheese are shown in .

4.3. Physicochemical Analysis

The pH of all cheese samples was measured by immersing the electrode of a digital pH meter (ADWA AD1030, Inc., Romania) directly into the samples. The titratable acidity (expressed as lactic acid per 100 g of cheese) was determined. The moisture content was determined by drying 5 g samples in an oven (70 °C/24 h), while the fat and fiber contents were determined following the AOAC protocol . The total nitrogen (TN) was determined following the Kjeldahl procedure and expressed as crude protein on a dry weight basis. A tristimulus colorimeter (Smart Color Pro, USA) was used to determine the samples' color characteristics. Color was expressed as L, a, and b values, where L ranges from 0 (black) to 100 (white); positive a values indicate redness and negative a values greenness, while positive b values indicate yellowness and negative b values blueness. The color analysis was conducted in triplicate, and the means ± SD were recorded.

4.4. Microbiological Profile Analysis of Cheese

Representative cheese samples weighing 10 g were analyzed at various time intervals (1st, 7th, 15th, 30th, and 45th days) throughout the storage period. The samples were blended with 90 mL of sterile saline (0.9% w/v) solution. Microbiological tests for total aerobic mesophilic bacteria, Lactobacilli, S. thermophilus, yeasts, and molds were performed according to the previous methodology . All cell counts were expressed as log10 CFU/g of cheese.

4.5. Texture Profile Analyses (TPA)

The texture profile analysis (TPA) was carried out using a texture analyzer (TA1000, Lab Pro (FTC TMS-Pro), USA) following the method proposed before .
The TPA parameters, including the peak force of the first compression (hardness cycle 1) (g), the peak force of the second compression (hardness cycle 2) (g), adhesive force, adhesiveness, resilience, springiness, and springiness index, were determined from force-time curves . TPA was carried out in triplicate on day one .

4.6. Scanning Electron Microscopy and Sensory Evaluation

The cheese samples were prepared and fixed using glutaraldehyde solution (3%) as described before . A panel of 20 people conducted the sensory evaluation of the cheese, as described by Allam et al. Sensory evaluation was conducted following institutional committee approval. The samples' color, odor, taste, texture, appearance, and overall acceptability were evaluated on a hedonic scale ranging from 1 (dislike) to 9 (like). For scanning electron microscopy (SEM) inspection, the samples were first sputter-coated with gold ions using an Edwards model S 140A sputter coater to create a conductive medium, and were then scanned using a JEOL Model JSM-T20 scanning electron microscope.

4.7. Antimicrobial Assessment against Food Pathogens

Approximately 100 g of cheese was divided into sterile plastic bottles (200 mL). The cheese samples were divided into four treatments for each pathogen. Following previous work, probiotic bacteria were inoculated (1 mL/100 g cheese) to provide a system containing 7 log10 CFU/g of the probiotic strain [ , , ]. For the pathogens, 6.5 log10 CFU/g of S. aureus, 5 log10 CFU/g of A. parasiticus ITEM 698, and 5 log10 CFU/g of P. chrysogenum ATCC 11709 were inoculated individually. The pathogen treatment groups are illustrated in ( ). Following inoculation, an electric mixer (Kenwood, UK) was used to shake all cheese samples (5 min). Afterward, they were stored (at 6 °C for 45 days), resulting in 48 samples (3 pathogenic strains × 4 treatments × 4 storage time intervals). Viable cell counts were performed on each sample at 0, 15, 30, and 45 days of refrigerated storage. For the viable cell counts of the fungal strains, potato dextrose agar (Sigma Aldrich, St. Louis, MO, USA) was used (48 h at 25 °C). For S. aureus, mannitol-sodium chloride-phenol red agar (Merck, Lowe, NJ, USA) was used (24 h at 37 °C). The results were expressed as means of log10 CFU/g cheese.

4.8. Anti-Mycotoxigenic Assessment against Aflatoxins (AFB1 and AFM1)

Certified vials of AFB1 and AFM1 (Sigma-Aldrich) were used to spike the cheese. The standards were dissolved in phosphate-buffered saline (PBS; 400 ng/mL) and spiked into the targeted samples. The biopreservative activity of the MG847589 strain was estimated using white cheese as a food model. Samples were randomly assigned to one of four treatments, in which different amounts of aflatoxins were applied ( ). The effectiveness of the bacteria and the bacteriocin in reducing aflatoxin content was investigated against a control. Quantitative determination of AFs was conducted using an Agilent 1100 HPLC system. The mobile phase was methanol:acetonitrile:water (1:3:6). The determination was achieved using the previously mentioned conditions .

4.9. Statistical Analysis

The experiments were performed in triplicate, and results are expressed as mean ± SD. ANOVA with a general linear model (SPSS Ver. 20) was used to test for significance, and p -values of less than 0.05 were considered significant.
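As a companion to the statistical analysis in Section 4.9, the sketch below reproduces the same test logic in Python. The study used SPSS (GLM-based ANOVA); the statsmodels calls and the toy hardness values here are assumptions for illustration, with Tukey's HSD added as a typical post hoc step.

```python
# Minimal sketch of the significance testing in Section 4.9: one-way ANOVA on
# triplicate measurements across the four cheese treatments, followed by
# Tukey's HSD. The toy hardness values below are illustrative, not study data.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "treatment": ["CS"] * 3 + ["CP"] * 3 + ["CB"] * 3 + ["CPB"] * 3,
    "hardness": [2650, 2640, 2655, 3360, 3350, 3365,
                 2520, 2530, 2525, 3990, 3985, 3992],  # toy values (g)
})

# General linear model ANOVA: treatment as a categorical factor
model = ols("hardness ~ C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # p < 0.05 => significant treatment effect

# Post hoc pairwise comparisons at alpha = 0.05
print(pairwise_tukeyhsd(df["hardness"], df["treatment"], alpha=0.05))
```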
Integrated analysis of rumen metabolomics and metataxonomics to understand changes in metabolic and microbial community in Korean native goats under heat stress

The heat stress (HS) environmental condition for ruminants is defined using the temperature humidity index (THI) , a combined function of ambient temperature and relative humidity. A THI value between 70 and 74 indicates a potential HS environment for ruminants . The increased release of greenhouse gases from various sources, including agriculture and livestock, has resulted in global warming and has increased the period of HS exposure for ruminants . HS severely affects ruminants through its negative effects on feed intake , growth performance, reproduction , and product quality , . In addition, HS-exposed ruminants experience downregulation of immune responses and frequently suffer from metabolic diseases caused by unusual concentrations of certain metabolites – . Among the methods for phenotyping organisms, metabolomics is a more comprehensive concept than genomics, proteomics, and/or transcriptomics . Additionally, it is crucial in systems biology for determining metabolic diseases, finding potential biomarkers, and identifying novel metabolic pathways in clinical studies . Proton nuclear magnetic resonance (¹H-NMR) spectroscopy is one of the most widely used metabolomic platforms because it requires minimal sample preparation; while gas and liquid chromatography-mass spectrometry platforms are more sensitive, the sample-preserving capability and highly reproducible quantification of ¹H-NMR serve to offset its lower sensitivity , . Numerous studies have investigated metabolite changes in biological samples of heat-stressed ruminants – . However, these have focused on large ruminants (dairy or beef Holstein, Jersey, Angus, etc.), and studies on small ruminants (dairy or beef goats, sheep, etc.) are lacking. Goats and sheep play important roles in the economies of millions of people who earn their livelihoods by rearing these animals under different weather conditions worldwide , . In Korea, goats are the second most important source of meat after Hanwoo cattle (Korean native cattle; Bos taurus coreanae ). Therefore, research on the prevention and diagnosis of HS in goats is essential. Metataxonomics allows a comprehensive analysis of the complex and diverse microbial communities that reside in the rumen and are essential for metabolism and overall health. The structure and function of the rumen microbial community are affected by numerous physical and chemical factors, including diet, feeding programs, animal phenotypes, and environmental factors such as HS , . HS can alter the microbial composition in the rumen, leading to physiological changes and, consequently, changes in productivity . Recent research has demonstrated that HS directly affects the composition and function of rumen microbial communities. However, these studies did not integrate the rumen and host metabolomes, which is crucial for a comprehensive understanding of rumen physiology , . Examining the linkages between these metabolomes makes it possible to understand the metabolic processes that occur in the rumen and how they are influenced by HS. Therefore, integrating rumen and host metabolomic data is necessary to completely understand the effects of HS on ruminal physiology.
We hypothesized that HS would alter both the rumen microbiome composition and metabolic profiles, leading to changes in host metabolism in Korean native goats ( Capra hircus coreanae ). Therefore, this study aimed to perform metabolomic studies using ¹H-NMR spectroscopy on rumen fluid and serum, and metataxonomic studies using 16S rRNA gene amplicon sequencing on rumen bacteria, to understand the differential biological responses of Korean native goats under optimum temperature period (OTP) and high temperature period (HTP) conditions. This integrative analysis provides an in-depth understanding of the complex metabolomes, microbiomes, and their potential interactions in heat-stressed goats. Moreover, we suggest strategies to ameliorate the effects of HS, including changes in the microbial population and energy metabolism disorders, in goats.
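For reference, the OTP/HTP classification used throughout this study rests on THI. The sketch below shows one widely used THI formulation (an NRC-style equation); the exact equation and cut-offs applied in this study are not restated in this excerpt, so the coefficients and thresholds in the code should be read as assumptions.

```python
# Minimal sketch: classifying measurement days from ambient temperature and
# relative humidity via THI. Assumption: an NRC (1971)-style THI formulation;
# the exact equation and category cut-offs may differ between studies.

def thi(temp_c: float, rh_percent: float) -> float:
    """Temperature-humidity index from ambient temperature (°C) and RH (%)."""
    t_f = 1.8 * temp_c + 32.0  # convert to Fahrenheit
    return t_f - (0.55 - 0.0055 * rh_percent) * (t_f - 58.0)

def classify(thi_value: float) -> str:
    """Map a THI value to a coarse stress category (illustrative cut-offs)."""
    if thi_value < 70:
        return "OTP (thermoneutral)"
    elif thi_value <= 74:
        return "potential heat stress"
    return "HTP (heat stress)"

if __name__ == "__main__":
    for t, rh in [(20.0, 50.0), (32.0, 80.0)]:
        v = thi(t, rh)
        print(f"T={t} °C, RH={rh}% -> THI={v:.1f} ({classify(v)})")
```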
Multivariate statistical analyses

To characterize the variations in the rumen fluid and serum metabolic profiles under the OTP and HTP conditions, PCA and PLS-DA were conducted (Fig. ). For rumen fluid, the PCA score plot revealed differences, with samples separating between the two conditions as the THI changed (PC 1: 20.9%; PC 2: 7.9%) (Fig. A). Similarly, the serum metabolites (PC 1: 25.6%; PC 2: 8.6%) were clearly separated between the two periods as the THI changed (Fig. A). These results show that the levels of rumen fluid and serum metabolites clearly differed between the OTP and HTP conditions. The PLS-DA score plots were also clearly separated into the two conditions as the THI changed from OTP to HTP, indicating changes in the metabolites of both biological samples (rumen fluid: component 1, 20.6%; component 2, 6.8%; serum: component 1, 26.4%; component 2, 5.5%) (Fig. B). Notably, the PLS-DA score plots showed less variation among the serum metabolites obtained during the HTP than among those obtained during the OTP (Fig. B).

Identification of metabolites showing differential abundance between OTP and HTP conditions

Differentially abundant metabolites in the rumen fluid and serum samples were investigated based on their comparative intensities under the OTP and HTP conditions. The metabolomic profiles from rumen fluid and serum clustered substantially by condition (Fig. and Tables S1-S2). In the rumen fluid, 2-oxoisocaproate, butyrate, valine, leucine, and propionate levels were significantly ( P < 0.05) higher in the OTP than in the HTP. In contrast, acetate, isopropanol, benzoate, urea, and dimethyl sulfone levels were significantly ( P < 0.05) higher in the HTP than in the OTP (Fig. A and Table ). In the serum, lactate, trimethylamine N -oxide, acetate, glucose, and urea levels were significantly ( P < 0.05) higher in the OTP than in the HTP. In contrast, glucuronate, formate, 3-hydroxyphenylacetate, glycine, and tyrosine levels were significantly ( P < 0.05) higher in the HTP than in the OTP (Fig. B and Table S2). The ammonia nitrogen concentration was 7.28 mg/dL in the OTP and 6.90 mg/dL in the HTP, with no significant difference ( P > 0.05) between the two conditions (data not shown). As shown in Fig. , signature metabolites in the rumen fluid and serum samples with VIP scores > 1.5 were identified and ranked using the PLS-DA model. In the rumen fluid, 2-oxoisocaproate (VIP score: 2.66), valine (2.663), leucine (2.36), isobutyrate (2.33), and methylsuccinate (2.26) had the highest VIP scores among metabolites elevated in the OTP (Fig. A). In contrast, isopropanol (2.37), benzoate (1.79), and dimethyl sulfone (1.75) had the highest VIP scores among metabolites elevated in the HTP (Fig. A). In the serum, all of the top-ranked metabolites were higher in the HTP than in the OTP, including galactarate (2.11), glucuronate (2.07), syringate (1.94), glycylproline (1.78), and isoeugenol (1.77) (Fig. B). The differential rumen fluid and serum metabolites identified in these analyses could serve as candidate biomarkers of HS in goats.

Metabolic pathway analysis of the metabolomes

Metabolic pathways were identified using the significantly different metabolites observed in rumen fluid and serum under the OTP and HTP conditions (Fig. and Tables S3-S4).
In rumen fluid, eight pathways had impact values higher than 0.1 (the cut-off value for relevance), including phenylalanine, tyrosine, and tryptophan biosynthesis (impact value: 0.50); phenylalanine metabolism (0.36); alanine, aspartate, and glutamate metabolism (0.31); glycine, serine, and threonine metabolism (0.30); and cysteine and methionine metabolism (0.15) (Fig. A and Table S3). In the serum samples, 13 pathways with impact values higher than 0.1 were observed, including phenylalanine, tyrosine, and tryptophan biosynthesis (0.50); glutathione metabolism (0.34); glycine, serine, and threonine metabolism (0.34); pentose and glucuronate interconversions (0.30); and ascorbate and aldarate metabolism (0.25) (Fig. B and Table S4). The enrichment and impact pathway results for rumen fluid and serum revealed several metabolic pathways common to the two metabolomes. Five common metabolic pathways were identified, including phenylalanine, tyrosine, and tryptophan biosynthesis; alanine, aspartate, and glutamate metabolism; and glycine, serine, and threonine metabolism.

Composition of rumen prokaryotic communities

In total, 1,907,001 sequences were obtained from the 16S rRNA gene amplicon sequencing analysis of the two groups. After quality filtering using QIIME 2 (Q score > 25), 772,199 quality-controlled sequences were generated, with an average of 38,610 ± 10,232 (mean ± SD) sequences per sample. The sequencing depth was deemed sufficient for the analysis of the rumen microbiota, as Good's coverage values were greater than 99.9% for all samples (data not shown). Among the alpha diversity measurements, Chao1 estimates were significantly higher ( P < 0.05) in the rumen fluid samples of the HTP, whereas evenness was significantly lower ( P < 0.05) than in the OTP. The Shannon and Simpson indices did not differ between the OTP and HTP conditions (Fig. A). PCoA plots constructed based on the weighted UniFrac ( P < 0.05) and unweighted UniFrac ( P < 0.001) distances revealed that the HTP rumen microbiota clustered separately from that of the OTP (Fig. B). Venn diagrams were used to compare the bacterial phyla, families, and genera (both classified and unclassified at the genus level) detected under the OTP and HTP conditions, revealing both shared and exclusively detected taxa (Fig. ). At the phylum level, 17 of the 18 detected phyla were shared between the OTP and HTP conditions; the unclassified bacterial phylum was found exclusively in the HTP. Of the 64 detected families, 56 were shared, whereas 2 and 6 were found exclusively in the OTP and HTP conditions, respectively. At the genus level, 103 of the 119 detected genera were shared between the OTP and HTP conditions, and only 7 and 9 genera were detected exclusively in the OTP and HTP conditions, respectively. At the phylum level, Bacteroidota (54.1% vs. 53.0%, FDR = 0.774), Firmicutes (30.6% vs. 27.9%, FDR = 0.506), Proteobacteria (4.33% vs. 7.50%, FDR = 0.396), and Verrucomicrobiota (2.56% vs. 3.28%, FDR = 0.396) were the predominant phyla under the OTP and HTP conditions (values shown as OTP vs. HTP) (Fig. C and Table S5). At the genus level, Prevotella (24.4% vs. 23.7%, FDR = 0.903), Rikenellaceae RC9 gut group (12.6% vs. 14.1%, FDR = 0.594), Quinella (6.29% vs. 4.92%, FDR = 0.645), and Bacteroidales RF16 group (5.15% vs. 3.17%, FDR = 0.587) were the predominant genera under the OTP and HTP conditions (Fig. C and Table S6).
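The VIP scores reported in the metabolomics results above can be derived directly from a fitted PLS-DA model. A minimal sketch follows, using scikit-learn's PLSRegression on a dummy-coded class label; this is an assumed reconstruction rather than the study's own pipeline, and the toy data, sample sizes, and names are illustrative.

```python
# Minimal sketch: PLS-DA via PLSRegression on a binary class label, plus VIP
# scores with the VIP > 1.5 cut-off used above. Assumes a feature matrix X
# (samples x metabolites) and labels y (0 = OTP, 1 = HTP); all values are toy.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def plsda_vip(X: np.ndarray, y: np.ndarray, n_components: int = 2) -> np.ndarray:
    pls = PLSRegression(n_components=n_components, scale=True)
    pls.fit(X, y.astype(float))
    t = pls.x_scores_    # latent scores, shape (n_samples, A)
    w = pls.x_weights_   # X weights, shape (n_features, A)
    q = pls.y_loadings_  # Y loadings, shape (1, A)
    # Variance in y explained by each latent component
    ssy = np.sum(t ** 2, axis=0) * (q ** 2).ravel()
    w_norm = w / np.linalg.norm(w, axis=0)
    p = X.shape[1]
    # Standard VIP formula: sqrt(p * sum_a(ssy_a * w_aj^2) / sum_a(ssy_a))
    return np.sqrt(p * (w_norm ** 2 @ ssy) / ssy.sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))  # 20 samples x 50 metabolites (toy data)
y = np.repeat([0, 1], 10)      # 10 OTP, 10 HTP
scores = plsda_vip(X, y)
print("metabolites with VIP > 1.5:", np.where(scores > 1.5)[0])
```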
Identification of rumen microbiota showing differential abundance between OTP and HTP conditions

We identified the differentially abundant microbial phyla and genera between the OTP and HTP conditions using LEfSe analysis (LDA > 2.0, P < 0.05) (Fig. ). At the phylum level, Desulfobacterota was enriched in the OTP, whereas Fibrobacterota was enriched in the HTP. At the genus level, 10 genera were enriched in the OTP: Papillibacter , Prevotellaceae NK3B31, Muribaculaceae, [Ruminococcus] gauvreauii, Lachnospiraceae ND3007, Lachnospiraceae NK3A20, Lachnospiraceae XPB1014, Desulfovibrio , Butyrivibrio , and F082. In contrast, four genera were enriched in the HTP: Fibrobacter , Anaeroplasma , Ruminococcus , and Oscillospiraceae UCG-002.

Rumen microbial interactions identified using co-occurrence networks

We used genera accounting for ≥ 0.1% average relative abundance in at least one of the OTP and HTP groups for the co-occurrence network analysis. It revealed 52 and 7 significant interactions under the OTP and HTP conditions, respectively, from among the 148 overall edges exclusively found in the rumen microbiota (Table S7 and Fig. S2). Based on two centrality measurements (authority and eigen centrality), Papillibacter and Prevotellaceae UCG-003 were the keystone genera under the OTP and HTP conditions, respectively. In the OTP, Papillibacter co-occurred with five genera ( Bacteroidales BS11 gut group, Succiniclasticum , Anaerovorax , UCG-010, and Muribaculaceae) and was mutually exclusive with five genera ( Flexilinea , Oscillospira , Hungateiclostridiaceae UCG-012, Clostridia vadinBB60 group, and Methanobrevibacter ). In the HTP, Prevotellaceae UCG-003 co-occurred with Quinella , Ruminococcus , and Hungateiclostridiaceae UCG-012, whereas negative interactions were observed with three genera (UG Lachnospiraceae, Oscillospiraceae NK4A214, and Anaerovibrio ).

Functional changes in rumen microbiota

To predict functional biomarkers of HS in goats, we performed PICRUSt2 analysis. Seven different reference databases were used to identify the predicted functional features of the rumen microbiota; however, no significant differences were observed between the OTP and HTP conditions (Table S8). We identified differentially abundant KEGG pathways and modules between the OTP and HTP conditions using LEfSe analysis (LDA > 2.0, P < 0.05) (Fig. ). A total of 11 major KEGG pathways were identified between the two conditions; nine were enriched in the OTP and two in the HTP. The pathways enriched in the OTP were nitrotoluene degradation (ko00633); quorum sensing (ko02024); porphyrin (ko00860), nitrogen (ko00910), methane (ko00680), butanoate (ko00650), glyoxylate and dicarboxylate (ko00630), and arginine and proline (ko00330) metabolism; and microbial metabolism in diverse environments (ko01120). In contrast, biotin (ko00780) and sulfur (ko00920) metabolism were enriched in the HTP. Furthermore, to identify linkages between the KEGG pathways and the rumen microbiota, we performed Spearman's rank correlation analysis (Fig. S3). Biotin metabolism (ko00780) was positively correlated with the relative abundance of Fibrobacter and negatively correlated with that of Butyrivibrio .
Notably, most genera were correlated with methane metabolism (ko00680), which was negatively correlated with Anaeroplasma and Oscillospiraceae UCG-002 and positively correlated with Lachnospiraceae, [Ruminococcus] gauvreauii, F082, Desulfovibrio , Butyrivibrio , Prevotellaceae NK3B31, and Methanobrevibacter . Regarding the metabolism of cofactors and vitamins, 4 out of 10 modules were enriched in the OTP (cobalamin biosynthesis: M00122, M00924, and M00925; molybdenum cofactor biosynthesis: M00880), while 6 out of 10 were enriched in the HTP (heme biosynthesis: M00121, M00926, and M00868; NAD biosynthesis: M00912; pimeloyl-ACP biosynthesis: M00572; and coenzyme A biosynthesis: M00120). Regarding energy metabolism, two out of three modules were enriched in the OTP (incomplete reductive citrate cycle: M00620; reductive citrate cycle: M00173), whereas one out of three was enriched in the HTP (NADH:quinone oxidoreductase: M00144). Xenobiotics biodegradation (phenylacetate degradation: M00878), signature modules (beta-lactam resistance: M00627), and carbohydrate (CHO) metabolism (citrate cycle: M00009) were enriched only in the OTP, while nucleotide (de novo purine biosynthesis: M00048), amino acid (AA) (cysteine biosynthesis: M00021), and lipid (fatty acid biosynthesis: M00083) metabolism, and the biosynthesis of other secondary metabolites (aurachin biosynthesis: M00848), were enriched only in the HTP.

Microbe-metabolite interactions associated with HS

A total of 149 metabolites were identified in the rumen metabolomes and subjected to Spearman's rank correlation analysis to select HS-associated metabolites. Based on the correlation results, 35 rumen metabolites were considered HS-associated (| r | ≥ 0.5, P ≤ 0.05) and used to predict HS with the RF model. For each metabolite, the mean decrease accuracy score was calculated to evaluate its contribution to the model's predictive accuracy. This score represents the average reduction in classification accuracy when the given metabolite is excluded from the predictors, providing a direct measure of its importance. Based on the metabolome results, four metabolites (butyrate, isopropanol, phenylacetate, and 2-oxoisocaproate) were selected using the RF model with a mean decrease accuracy > 3 (Fig. ). The constructed model had an AUC of 0.930 to 1, indicating a high level of accuracy in predicting HS. Multi-omic biplots depicting microbe-metabolite interactions under the OTP and HTP conditions are shown in Fig. B,C, respectively. Additionally, heatmaps were generated to visualize the inferred conditional probabilities (> 1) of specific metabolites, revealing distinct interaction patterns between microbes and metabolites under the OTP and HTP conditions.

Relationship between rumen metabolome and microbiome, and serum metabolome

Spearman's rank correlation analysis was performed to identify linkages between rumen metabolites (FDR < 0.05) and microbiota (LDA > 2.0, P < 0.05) (Fig. A). A total of 35 metabolites were strongly correlated (| r | ≥ 0.5, P ≤ 0.05). The relative abundance of Fibrobacter was positively correlated with the acetate concentration and negatively correlated with the relative abundance of Desulfovibrio , Lachnospiraceae XPB1014, Muribaculaceae, F082, Butyrivibrio , and Papillibacter . The isopropanol concentration was positively correlated with the relative abundances of Fibrobacter , Anaeroplasma , and Oscillospiraceae UCG-002.
The butyrate concentration was positively correlated with the relative abundance of Lachnospiraceae ND3007, Lachnospiraceae NK3A20, Desulfovibrio , and Prevotellaceae NK3B31, and negatively correlated with the relative abundance of Ruminococcus . The phenylacetate concentration was negatively correlated with the relative abundance of Oscillospiraceae UCG-002, Ruminococcus , and Fibrobacter , and positively correlated with the relative abundance of [Ruminococcus] gauvreauii, Lachnospiraceae NK3A20, Desulfovibrio , Lachnospiraceae XPB1014, and Lachnospiraceae ND3007. Spearman's rank correlation analysis between serum metabolites (FDR < 0.05) and the rumen microbiota revealed that 35 metabolites were strongly correlated (| r | ≥ 0.5, P ≤ 0.05) (Fig. B). The acetate concentration was negatively correlated with the relative abundance of Fibrobacter , Anaeroplasma , Oscillospiraceae UCG-002, and Ruminococcus , and positively correlated with the relative abundance of [Ruminococcus] gauvreauii, Lachnospiraceae NK3A20, Desulfovibrio , Lachnospiraceae XPB1014, Lachnospiraceae ND3007, Muribaculaceae, and Butyrivibrio . The kynurenine concentration was negatively correlated with the relative abundance of Desulfovibrio and [Ruminococcus] gauvreauii.
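The RF-based metabolite selection described above (mean decrease accuracy, AUC) can be sketched as follows. The original analysis tool is not named in this excerpt, so this scikit-learn version, with permutation importance standing in for mean decrease accuracy, is an assumption, and the data are synthetic.

```python
# Minimal sketch: random forest prediction of HS status from metabolite levels,
# with permutation importance as an analogue of mean decrease accuracy and AUC
# as the performance metric. All data below are toy values, not study data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(40, 35))   # 40 samples x 35 HS-associated metabolites
y = np.repeat([0, 1], 20)       # 0 = OTP, 1 = HTP
X[y == 1, 0] += 1.5             # make metabolite 0 informative (toy signal)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

# AUC on held-out samples; the study reports AUC values of 0.930 to 1
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
# Mean decrease in accuracy when each metabolite is permuted
imp = permutation_importance(rf, X_te, y_te, n_repeats=50, random_state=0)
print(f"AUC = {auc:.3f}")
print("top metabolites by mean decrease in accuracy:",
      np.argsort(imp.importances_mean)[::-1][:4])
```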
To characterize the variations in the rumen fluid and serum metabolic profiles of the OTP and HTP conditions, PCA and PLS-DA were conducted (Fig. ). In the rumen fluid, PCA score plot revealed difference, which were separated in two conditions as THI changed (PC 1: 20.9% and PC 2: 7.9%) (Fig. A). In addition, the serum metabolites (PC 1: 25.6% and PC 2: 8.6%) were clearly separated in two periods as THI changed (Fig. A). These results showed that expression levels of rumen fluid and serum metabolites were obviously different under OTP and HTP conditions. In the PLS-DA score plots were clearly separated into two conditions as the THI changed from OTP to HTP, indicating changes in the metabolites of the two biological samples (rumen fluid, component 1: 20.6% and component 2: 6.8%; serum, component 1: 26.4% and component 2: 5.5%) (Fig. B). Notably, as shown in the PLS-DA score plot, we determined less variation in the serum metabolites obtained during HTP than those obtained during OTP (Fig. B).
Differentially abundant metabolites in the rumen fluid and serum samples were investigated based on their comparative intensities under the OTP and HTP conditions. The metabolomic profiles from rumen fluid and serum were substantially clustered together under OTP and HTP conditions (Fig. and Tables S1-S2). In the rumen fluid, 2-oxoisocaproate, butyrate, valine, leucine, and propionate levels were significantly ( P < 0.05) higher in the OTP than in the HTP. In contrast, acetate, isopropanol, benzoate, urea, and dimethyl sulfone level were significantly ( P < 0.05) higher in the HTP than in the OTP (Fig. A and Table ). In the serum, lactate, trimethylamine N -oxide, acetate, glucose, and urea levels were significantly ( P < 0.05) higher in the OTP than in the HTP. In contrast, glucuronate, formate, 3-hydroxyphenylacetate, glycine, and tyrosine levels were significantly ( P < 0.05) higher in the HTP than in the OTP (Fig. B and Table S2). The ammonia nitrogen concentration was 7.28 mg/dL in the OTP and 6.90 mg/dL in the HTP, with no significant difference ( P > 0.05) between the two conditions (data not shown). As shown in Fig. , signature metabolites in rumen fluid and serum samples with VIP scores > 1.5 were identified and ranked using the PLS-DA model. In the rumen fluid, 2-oxoisocaproate (VIP score:2.66), valine (2.663), leucine (2.36), isobutyrate (2.33), and methylsuccinate (2.26) had the highest VIP scores in the OTP than in the HTP (Fig. A). In contrast, isopropanol (2.37), benzoate (1.79), and dimethyl sulfone (1.75) exhibited the highest VIP scores in the HTP than in the OTP (Fig. A). In the serum, all metabolites had higher VIP scores in the HTP than in the OTP, including galactarate (2.11), glucuronate (2.07), syringate (1.94), glycylproline (1.78), and isoeugenol (1.77) (Fig. B). The differential rumen fluid and serum metabolites identified in these analyses could candidate potential biomarkers indicating HS in goats.
Metabolic pathways were identified using the significantly different metabolites observed in rumen fluid and serum under the OTP and HTP conditions (Fig. and Tables S3-S4). Eight pathways associated with rumen fluid had impact values higher than 0.1, the cut-off value for relevance, including phenylalanine, tyrosine, and tryptophan biosynthesis (impact value: 0.50); phenylalanine metabolism (0.36); alanine, aspartate, and glutamate metabolism (0.31); glycine, serine, and threonine metabolism (0.30); and cysteine and methionine metabolism (0.15) (Fig. A and Table S3). In the serum samples, 13 pathways with impact values higher than 0.1 were observed, including phenylalanine, tyrosine, and tryptophan biosynthesis (0.50); glutathione metabolism (0.34); glycine, serine, and threonine metabolism (0.34); pentose and glucuronate interconversions (0.30); and ascorbate and aldarate metabolism (0.25) (Fig. B and Table S4). The enrichment and pathway-impact analyses of the rumen fluid and serum identified several metabolic pathways common to the two metabolomes, including phenylalanine, tyrosine, and tryptophan biosynthesis; alanine, aspartate, and glutamate metabolism; and glycine, serine, and threonine metabolism.
In total, 1,907,001 sequences were obtained from the 16S rRNA gene amplicon sequencing analysis of the two groups. After quality filtering using QIIME 2 (Q score > 25), 772,199 quality-controlled sequences were generated, with an average of 38,610 ± 10,232 (mean ± SD) sequences per sample. The sequencing depth for the analysis of the rumen microbiota was deemed sufficient, as Good's coverage values were greater than 99.9% for all samples (data not shown). Among the alpha diversity measurements, the Chao1 estimate ( P < 0.05) was significantly higher in the rumen fluid samples of the HTP, whereas evenness ( P < 0.05) was lower than in the OTP. The Shannon and Simpson indices did not differ between the OTP and HTP conditions (Fig. A). PCoA plots constructed from the weighted UniFrac ( P < 0.05) and unweighted UniFrac ( P < 0.001) distances revealed that the HTP rumen microbiota clustered separately from that of the OTP (Fig. B). Venn diagrams were used to compare the bacterial phyla, families, and genera (both classified and unclassified at the genus level) detected under the OTP and HTP conditions, revealing both shared and exclusively detected taxa (Fig. ). At the phylum level, 17 of the 18 detected phyla were shared between the OTP and HTP conditions; bacteria unclassified at the phylum level were detected exclusively in the HTP. Of the 64 detected families, 56 were shared, whereas 2 and 6 were exclusively found under the OTP and HTP conditions, respectively. At the genus level, 103 of the 119 detected genera were shared between the OTP and HTP conditions, and only 7 and 9 genera were exclusively detected under the OTP and HTP conditions, respectively. At the phylum level, Bacteroidota (54.1% vs. 53.0%, FDR = 0.774), Firmicutes (30.6% vs. 27.9%, FDR = 0.506), Proteobacteria (4.33% vs. 7.50%, FDR = 0.396), and Verrucomicrobiota (2.56% vs. 3.28%, FDR = 0.396) were the most abundant phyla under the OTP and HTP conditions (Fig. C and Table S5). At the genus level, Prevotella (24.4% vs. 23.7%, FDR = 0.903), Rikenellaceae RC9 gut group (12.6% vs. 14.1%, FDR = 0.594), Quinella (6.29% vs. 4.92%, FDR = 0.645), and Bacteroidales RF16 group (5.15% vs. 3.17%, FDR = 0.587) were the most abundant genera under the OTP and HTP conditions (Fig. C and Table S6).
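The following is a minimal R sketch of this kind of diversity and clustering analysis using the vegan package; the count table asv and grouping factor period are hypothetical, and Bray-Curtis distance is used here as a stand-in for UniFrac, which would additionally require the phylogenetic tree (e.g., via phyloseq).

# Minimal sketch: alpha diversity, PCoA, and PERMANOVA on a rarefied ASV table.
library(vegan)

set.seed(1)
asv <- matrix(rpois(20 * 200, lambda = 5), nrow = 20)   # samples x ASVs (counts)
period <- factor(rep(c("OTP", "HTP"), each = 10))

chao1    <- estimateR(asv)["S.chao1", ]         # richness (Chao1 estimate)
shannon  <- diversity(asv, index = "shannon")
simpson  <- diversity(asv, index = "simpson")
evenness <- shannon / log(specnumber(asv))      # Pielou's evenness
wilcox.test(chao1 ~ period)                     # alpha diversity comparison

d    <- vegdist(asv, method = "bray")           # stand-in for UniFrac distance
pcoa <- cmdscale(d, k = 2, eig = TRUE)          # principal coordinates analysis
adonis2(d ~ period, permutations = 9999)        # PERMANOVA test of clustering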
We identified the differentially abundant microbial phyla and genera between the OTP and HTP conditions using LEfSe analysis (LDA > 2.0, P < 0.05) (Fig. ). At the phylum level, Desulfobacterota was enriched in the OTP, whereas Fibrobacterota was enriched in the HTP. At the genus level, 10 genera were enriched in the OTP: Papillibacter , Prevotellaceae NK3B31, Muribaculaceae, [Ruminococcus] gauvreauii, Lachnospiraceae ND3007, Lachnospiraceae NK3A20, Lachnospiraceae XPB1014, Desulfovibrio , Butyrivibrio , and F082. In contrast, four genera were enriched in the HTP: Fibrobacter , Anaeroplasma , Ruminococcus , and Oscillospiraceae UCG-002.
We used genera accounting for ≥ 0.1% average relative abundance in at least one of the OTP and HTP groups for the co-occurrence network analysis. This analysis revealed 52 and 7 significant interactions exclusive to the OTP and HTP conditions, respectively, among the 148 overall edges found in the rumen microbiota networks (Table S7 and Fig. S2). Based on two centrality measurements (authority and eigenvector centrality), Papillibacter and Prevotellaceae UCG-003 were the keystone genera under the OTP and HTP conditions, respectively. In the OTP, Papillibacter co-occurred with five genera (Bacteroidales BS11 gut group, Succiniclasticum , Anaerovorax , UCG-010, and Muribaculaceae), whereas it was mutually exclusive with five genera ( Flexilinea , Oscillospira , Hungateiclostridiaceae UCG-012, Clostridia vadinBB60 group, and Methanobrevibacter ). In the HTP, Prevotellaceae UCG-003 co-occurred with Quinella , Ruminococcus , and Hungateiclostridiaceae UCG-012, whereas negative interactions were observed with three genera (UG Lachnospiraceae, Oscillospiraceae NK4A214, and Anaerovibrio ).
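Because FastSpar/SparCC and Gephi run outside R, the sketch below only illustrates the downstream step: building a network from a precomputed correlation matrix and ranking genera by the two centrality measures named above. The matrix cors is hypothetical; in practice it would be the FastSpar output filtered by its permutation P values.

# Minimal sketch: co-occurrence network and keystone-genus ranking with igraph.
library(igraph)

set.seed(7)
g_names <- paste0("genus", 1:15)
cors <- matrix(runif(225, -1, 1), 15, 15, dimnames = list(g_names, g_names))
cors[lower.tri(cors)] <- t(cors)[lower.tri(cors)]   # symmetrize
diag(cors) <- 0

adj <- (abs(cors) >= 0.5) * 1                        # keep strong edges only
net <- graph_from_adjacency_matrix(adj, mode = "undirected", diag = FALSE)
E(net)$sign <- sign(cors[as_edgelist(net)])          # +1 co-occurrence, -1 exclusion

eig  <- eigen_centrality(net)$vector                 # eigenvector centrality
auth <- authority_score(net)$vector                  # authority (HITS) score
head(sort(eig, decreasing = TRUE), 3)                # candidate keystone genera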
To predict the functional biomarkers of HS in goats, we performed PICRUSt2 analysis. Seven different reference databases were used to identify the predicted functional features of the rumen microbiota; however, no significant differences were observed between the OTP and HTP conditions (Table S8). We identified differentially abundant KEGG pathways and modules between the OTP and HTP conditions using LEfSe analysis (LDA > 2.0, P < 0.05) (Fig. ). A total of 11 major KEGG pathways were identified between the two conditions. Among these, nine pathways were enriched in the OTP, whereas two were enriched in the HTP. The pathways enriched in the OTP were nitrotoluene degradation (ko00633), quorum sensing (ko02024), porphyrin metabolism (ko00860), nitrogen metabolism (ko00910), methane metabolism (ko00680), butanoate metabolism (ko00650), glyoxylate and dicarboxylate metabolism (ko00630), arginine and proline metabolism (ko00330), and microbial metabolism in diverse environments (ko01120). In contrast, biotin (ko00780) and sulfur (ko00920) metabolisms were enriched in the HTP. Furthermore, to identify the linkages between the KEGG pathways and the rumen microbiota, we performed Spearman's rank correlation analysis (Fig. S3). Biotin metabolism (ko00780) was positively correlated with the relative abundance of Fibrobacter and negatively correlated with that of Butyrivibrio . Notably, most genera were correlated with methane metabolism (ko00680), which was negatively correlated with Anaeroplasma and Oscillospiraceae UCG-002 and positively correlated with Lachnospiraceae, [Ruminococcus] gauvreauii, F082, Desulfovibrio , Butyrivibrio , Prevotellaceae NK3B31, and Methanobrevibacter . Regarding the metabolism of cofactors and vitamins, 4 of 10 modules were enriched in the OTP (cobalamin biosynthesis: M00122, M00924, and M00925; molybdenum cofactor biosynthesis: M00880), while 6 of 10 were enriched in the HTP (heme biosynthesis: M00121, M00926, and M00868; NAD biosynthesis: M00912; pimeloyl-ACP biosynthesis: M00572; and coenzyme A biosynthesis: M00120). Regarding energy metabolism, two of three modules were enriched in the OTP (incomplete reductive citrate cycle: M00620; reductive citrate cycle: M00173), whereas one of three was enriched in the HTP (NADH:quinone oxidoreductase: M00144). Xenobiotics biodegradation (phenylacetate degradation: M00878), signature modules (beta-lactam resistance: M00627), and carbohydrate (CHO) metabolism (citrate cycle: M00009) were enriched only in the OTP, while nucleotide (de novo purine biosynthesis: M00048), amino acid (AA) (cysteine biosynthesis: M00021), and lipid (fatty acid biosynthesis: M00083) metabolisms, as well as the biosynthesis of other secondary metabolites (aurachin biosynthesis: M00848), were enriched only in the HTP.
A total of 149 metabolites were identified in the rumen metabolome and subjected to Spearman's rank correlation analysis to select HS-associated metabolites. Based on the correlation results, 35 rumen metabolites were considered HS-associated (| r | ≥ 0.5, P ≤ 0.05) and used for predicting HS with the RF model. For each metabolite, the mean decrease accuracy score was calculated to evaluate its contribution to the model's predictive accuracy. This score represents the average reduction in classification accuracy when the given metabolite is excluded from the predictors, providing a direct measure of its importance. Based on the metabolome results, four metabolites (butyrate, isopropanol, phenylacetate, and 2-oxoisocaproate) were selected using the RF model with a mean decrease accuracy > 3 (Fig. ). The constructed model had an AUC ranging from 0.930 to 1, indicating a high level of accuracy in predicting HS. Multi-omic biplots depicting microbe-metabolite interactions under the OTP and HTP conditions are shown in Fig. B,C, respectively. Additionally, heatmaps were generated to visualize the inferred conditional probabilities (> 1) of specific metabolites, revealing distinct interaction patterns between microbes and metabolites under the OTP and HTP conditions.
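A minimal R sketch of this selection step is shown below; the data frame mets and factor period are hypothetical, and the out-of-bag vote fractions are used here in place of the cross-validated probabilities obtained with caret in the actual analysis.

# Minimal sketch: rank metabolites by mean decrease accuracy and compute the AUC.
library(randomForest)
library(pROC)

set.seed(123)
mets <- as.data.frame(matrix(rnorm(20 * 35), nrow = 20,
                             dimnames = list(NULL, paste0("met", 1:35))))
period <- factor(rep(c("OTP", "HTP"), each = 10))

rf  <- randomForest(x = mets, y = period, ntree = 500, importance = TRUE)
mda <- importance(rf, type = 1)            # mean decrease accuracy per metabolite
rownames(mda)[mda[, 1] > 3]                # metabolites passing the > 3 cut-off

prob <- rf$votes[, "HTP"]                  # out-of-bag vote fractions
auc(roc(period, prob))                     # area under the ROC curve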
Spearman’s rank correlation analysis was performed to identify the linkages between rumen metabolites (FDR < 0.05) and microbiota (LDA > 2.0, P < 0.05) (Fig. A). A total of 35 metabolites were strongly correlated (| r | ≥ 0.5, P ≤ 0.05). The relative abundance of Fibrobacter was positively correlated with the acetate concentration and negatively correlated with the relative abundance of Desulfovibrio , Lachnospiraceae XPB1014, Muribaculaceae, F082, Butyrivibrio , and Papillibacter . The isopropanol concentration was positively correlated with the relative abundances of Fibrobacter , Anaeroplasma , and Oscillospiraceae UCG-002. The butyrate concentration was positively correlated with the relative abundance of Lachnospiraceae ND3007, Lachnospiraceae NK3A20, Desulfovibrio , and Prevotellaceae NK3B31 and negatively correlated with the relative abundance of Ruminococcus . The phenylacetate concentration was negatively correlated with the relative abundance of Oscillospiraceae UCG-002, Ruminococcus , and Fibrobacter and positively correlated with the relative abundance of [Ruminococcus] gauvreauii, Lachnospiraceae NK3A20, Desulfovibrio , Lachnospiraceae XPB1014, and Lachnospiraceae ND3007. Spearman’s rank correlation analysis between serum metabolites (FDR < 0.05) and the rumen microbiota likewise revealed 35 strongly correlated metabolites (| r | ≥ 0.5, P ≤ 0.05) (Fig. B). The serum acetate concentration was negatively correlated with the relative abundance of Fibrobacter , Anaeroplasma , Oscillospiraceae UCG-002, and Ruminococcus and positively correlated with the relative abundance of [Ruminococcus] gauvreauii, Lachnospiraceae NK3A20, Desulfovibrio , Lachnospiraceae XPB1014, Lachnospiraceae ND3007, Muribaculaceae, and Butyrivibrio . The kynurenine concentration was negatively correlated with the relative abundance of Desulfovibrio and [Ruminococcus] gauvreauii.
We performed a comprehensive analysis of the rumen metataxonome, along with the rumen and serum metabolomes, to gain insight into the complex interactions between the rumen microbiome and host metabolome, which play a critical role in regulating physiological responses in HS goats. Using this integrated approach, we identified and quantified the relative effects of rumen microbial composition, function, and metabolites on the variations observed due to HS. Amino acid metabolites, which are essential for rumen function and animal performance , are principally obtained from the degradation of dietary and microbial protein by the rumen microbiota . In the present study, AA metabolites were found to be significantly affected by HS. In the rumen fluid, the concentrations of AA metabolites (leucine, glycine, methionine, and creatine; P < 0.05) and of 2-oxoisocaproate ( P < 0.0001, VIP score: 2.66) were significantly higher in the OTP than in the HTP. These AA are known to enhance metabolism, antioxidant status, and immunity – , while 2-oxoisocaproate, a metabolic intermediate of leucine , is related to aminoacyl-tRNA biosynthesis . The significant difference in aminoacyl-tRNA biosynthesis ( P < 0.0001) between conditions further suggests that HS substantially affects ruminal AA metabolism. Under HS, decreased feed intake leads to lower blood glucose concentrations, potentially affecting energy metabolism and liver function – . When the glucose supply is insufficient, increased lipolysis leads to a higher production of ketone bodies, including acetone , . In the rumen, acetone is converted to isopropanol , , suggesting that isopropanol could serve as an indirect biomarker of HS. In this study, the isopropanol concentration was significantly higher ( P < 0.0001, VIP score: 2.37) in the HTP than in the OTP, while serum glucose was higher ( P < 0.05) in the OTP. Additionally, glycolysis and gluconeogenesis showed significant differences ( P < 0.01) between conditions. Owing to nutrient scarcity under HS, AA metabolism increases in the liver, resulting in elevated blood AA concentrations , . Consistent with previous studies in HS cows and steers that showed increased levels of glucogenic amino acids – , we found significantly higher serum concentrations of glutamine, methionine, tyrosine ( P < 0.05), and glycine ( P < 0.01) in the HTP. Several AA metabolic pathways, particularly phenylalanine, tyrosine, and tryptophan biosynthesis ( P < 0.01, impact value: 0.5), were significantly altered. The serum acetate concentration was significantly higher ( P < 0.05) in the OTP than in the HTP, which is notable as acetate has been associated with anti-inflammatory effects and serves as an important lipogenic substrate in ruminants . Kynurenine, which has immunomodulatory properties, was significantly higher ( P < 0.01) in the HTP than in the OTP. As kynurenine is related to inflammatory responses and energy balance , , its increased concentration could be related to HS. In our study, kynurenine showed a negative correlation with the abundance of Desulfovibrio and the [Ruminococcus] gauvreauii group, both enriched in the OTP, and tryptophan metabolism was significantly different ( P < 0.001, impact value: 0.15) between conditions. Similarly, betaine and glutathione, which have anti-inflammatory properties , showed significantly higher concentrations ( P < 0.01) in the HTP than in the OTP, consistent with previous findings in HS cows . Glutathione metabolism also showed significant differences ( P < 0.01, impact value: 0.34) between conditions.
Given that HS adversely affects immunological functions in ruminants , metabolites such as kynurenine, betaine, and glutathione could serve as potential serum biomarkers of HS. Regarding the rumen microbiota, we found differences in the Chao1 estimate and evenness between the OTP and HTP conditions, indicating that HS affected the ruminal microbial composition. This is consistent with the results of a previous study on Holstein cows, except for evenness . However, our results contradicted a previous report that alpha diversity was not affected by HS in chamber settings . The differences in the effects of HS on ruminal microbial composition among studies may be due to several factors, including exposure duration or intensity , animal breed , age , and gender . Additionally, changes in the rumen and serum metabolomes under HS conditions, such as alterations in amino acid metabolism and increased stress-related metabolites, provided important insights into the physiological responses of goats to HS. Biotin is an essential nutrient for both rumen microbes and the host and a cofactor for various enzymes required for AA, CHO, and fatty acid metabolism . Most rumen fibrolytic bacteria require biotin for growth , which improves rumen fiber fermentation . This is consistent with our finding that Fibrobacter was positively correlated with biotin metabolism. Moreover, the serum biotin concentration was positively correlated with Oscillospiraceae UCG-002, which was more abundant in the HTP. This genus is associated with the activation of energy metabolism, such as glycolysis, in the rumen . Cows with high milk yield show enriched biotin metabolism, suggesting a critical role of biotin in milk production . Although our study did not focus on host production, the role of biotin in energy metabolism could be particularly important during HS, which requires the animals to devote more energy to thermoregulation and other stress-related responses. Here, we identified certain co-occurring and mutually exclusive relationships between genera exclusively detected under the OTP and HTP conditions. Notably, the ruminal microbiota of the OTP exhibited substantially more co-occurrence or mutual exclusion relationships than that of the HTP (52 vs. 7). In the HTP, Prevotellaceae UCG-003 was mutually exclusive with butyrate-producing bacteria such as UG Lachnospiraceae and Anaerovibrio , whereas it co-occurred with acetate-producing bacteria such as Quinella and Ruminococcus . The findings of our study differ from those of Zhong et al., who reported mutual exclusion between Prevotella and Ruminococcus . However, it is important to note that the experimental conditions of the two studies differed. In contrast to our study, Zhong et al. did not consider the effects of thermal conditions on co-occurrence relationships, which may have influenced the observed patterns of bacterial interactions. However, another study that considered the effects of thermal conditions on co-occurrence relationships showed that, in dairy cows under OTP, Prevotella and Ruminococcus had a mutually exclusive relationship . These conflicts among prior reports and our findings may be explained by the fact that the ruminal microbiota may differ among animal species under HS. Nevertheless, our results identified the co-occurrence and mutual exclusion relationships exclusively detected under the OTP and HTP conditions in the ruminal microbiota of goats.
These findings suggest that HS can alter the responses of these microbial genera in goats, highlighting the importance of considering various animal species in studies on the effects of HS on the ruminal microbiota. In the present study, we found that HS resulted in the differential enrichment of acetate and butyrate in the rumen, suggesting a shift in the metabolic pathways of ruminal microbes. The HS-induced reduction in feed intake has been reported to cause changes in VFA production and energy requirements , with changes in rumen microbial abundance being a main factor affecting rumen fermentation characteristics . While previous studies reported decreased acetate concentrations during HS in ruminants , , we found a significantly higher acetate concentration ( P < 0.0001) in the HTP than in the OTP. This aligns with findings in HS buffaloes, where acetate increased due to microbial adaptation to HS , and in certain goat breeds ( Osmanabadi and Malabari ), where the increased acetate concentration was attributed to differences in rumen microbe abundance and feed digestibility , . Our results suggest that acetate increased in HTP goats to compensate for the HS-induced energy deficiency. In the HTP, acetate-producing bacteria, particularly Fibrobacter and Ruminococcus , were enriched. Fibrobacter , which breaks down plant cellulose to produce acetate , was highly enriched in the HTP rumen microbial community, which also showed higher Chao1 estimates and lower evenness. This enrichment was supported by its positive correlation with the ruminal acetate concentration. The increase in Fibrobacter contradicted previous findings in HS goats but aligned with observations in HS dairy cows . It is possible that Fibrobacter has stronger heat resistance than other ruminal bacteria, as suggested by Kim et al. . Previous studies have reported increased butyrate concentrations in ruminants exposed to HS. However, in the present study, butyrate concentrations were significantly lower ( P < 0.0001) in the HTP than in the OTP. Pragna et al. (2018) similarly reported that the butyrate concentration in three goat breeds significantly decreased under HS. Butyrate is an important factor in the postnatal development of the ruminal epithelium , which is responsible for many important physiological functions, including absorption, transport, and short-chain fatty acid metabolism . While the genus Butyrivibrio was enriched in the OTP without a correlation to the butyrate concentration, it showed the highest probability of co-occurrence with butyrate in the HTP. Although the HTP had lower butyrate concentrations, the role of Butyrivibrio in butyrate production may be more important under the HTP. Regarding phenylacetate metabolism, which promotes cellulose degradation and the growth of rumen microbiota including Ruminococcus spp. – , we found significantly higher phenylacetate concentrations ( P < 0.001) in the OTP than in the HTP. Phenylacetate showed a positive correlation with the [Ruminococcus] gauvreauii group in the OTP, and the concentration of 3-phenylpropionate also tended to be higher (0.05 ≤ P ≤ 0.10) in the OTP. The observed co-occurrence probability between the microbiota and phenylacetate in the HTP suggests that HS affects these metabolic relationships. Further research is needed to understand the underlying mechanisms of these HS-induced changes in the ruminal microbiome.
Most genera that showed significantly higher abundance in the OTP belonged to the family Lachnospiraceae, whose main rumen members, including Butyrivibrio and several Lachnospiraceae groups, are involved in butyrate and acetate synthesis . In the present study, the Lachnospiraceae ND3007 group and Butyrivibrio were enriched > 2-fold in the OTP, and the Lachnospiraceae NK3A20 group showed > 5-fold enrichment. These genera were positively correlated with butyrate concentrations in the rumen fluid, suggesting that they play an essential role in butyrate biosynthesis. Additionally, Papillibacter is involved in butyrate production and was enriched in the OTP, even though no correlation with the butyrate concentration was observed in the present study. Desulfovibrio , which serves as a butyrate-oxidizing bacterium in the rumen, was also positively correlated with the butyrate concentration and was enriched in the OTP. The enrichment of butyrate-producing or butyrate-oxidizing bacteria under the OTP suggests that they may have contributed to the increased butanoate metabolism observed during this period. Additionally, the higher butyrate concentration observed under the OTP may be associated with the enrichment of the butanoate metabolism pathway. Therefore, it is possible that the observed enrichment of these bacteria under the OTP contributed to the overall increase in butyrate production, potentially through the butanoate metabolism pathway.
In conclusion, our study investigated the changes in the metabolome and rumen microbial populations of Korean native goats under HS. We observed differential expression of metabolites under the OTP and HTP conditions. Several metabolites (butyrate, isopropanol, phenylacetate, and 2-oxoisocaproate in the rumen fluid, and acetate, betaine, glucuronate, and kynurenine in the serum) were significantly altered between the two periods and hence could potentially be used as HS biomarkers in goats. Furthermore, our analysis of the rumen and serum metabolomes highlights the importance of considering these factors to comprehensively understand the effects of HS on rumen microbial composition. Specifically, we observed that the main acetate-producing bacteria in the rumen, such as Fibrobacter and Ruminococcus , were enriched in the HTP, whereas butyrate-producing bacteria, such as members of the family Lachnospiraceae (including Butyrivibrio ) and Papillibacter , were enriched in the OTP. The observed enrichment was consistent with the concentrations of the rumen metabolites. To the best of our knowledge, this is the first study to use multi-omics tools to investigate the physiological responses of goats to HS, providing novel insights into changes in microbial diversity. By identifying changes in the concentrations of acetate and butyrate in the rumen, our study provided evidence regarding the physiological responses of goats to HS. These findings contribute to a more comprehensive understanding of the effects of HS on animal health and productivity and may have implications for developing strategies to mitigate the adverse effects of HS in goats.
Animal ethics statement
All methods and experimental protocols were carried out in accordance with the guidelines and regulations of the Gyeongsang National University Institutional Animal Care and Use Committee (GNU-IACUC) and approved under protocol number GNU-210705-E0063. The experiments were carried out at the Gyeongsang National University Animal Breeding Farm from May 1 to August 11 in Jinju, Gyeongsangnam-do, Republic of Korea (35.12.35° N, 128.08.21° E). All animal studies followed the ARRIVE guidelines ( https://arriveguidelines.org ). The goats used in this study were not sacrificed and were returned to their normal housing and management conditions at the conclusion of the experiment.

Experimental design, animals and diet
A total of 10 Korean native goats [ Capra hircus coreanae , 41.08 ± 1.83 kg (mean ± standard deviation), male] were used in the study. Goats were fed a diet composed of tall fescue hay and a commercial concentrate in a 50:50 ratio to meet nutrient requirements, as per National Research Council recommendations . The diet was provided twice daily in two equal meals at 0800 h and 1600 h. To encourage forage consumption before concentrate intake, tall fescue hay was offered first, followed by the concentrate mix. Although feed intake was not specifically measured, all animals were observed during feeding to ensure consistent access to their assigned diet. The study was conducted over two distinct 10-day periods, corresponding to different temperature-humidity index (THI) conditions: an optimal temperature period (OTP, May 1-10) and a high temperature period (HTP, August 2-11). This resulted in a total study duration of 20 days, with the goats exposed to each period once. The THI was calculated using temperature and humidity data collected every hour with a temperature and humidity meter (Testo 174H Mini data logger; West Chester, PA, USA). The THI equation used was as follows :

$$\mathrm{THI} = (1.8 \times T + 32) - [(0.55 - 0.0055 \times \mathrm{RH}) \times (1.8 \times T - 26)],$$

where T is the ambient temperature (°C) and RH is the relative humidity (%). The average THI values for the OTP and HTP were 57.13 ± 3.98 and 80.27 ± 1.22, respectively. Rumen fluid and serum samples were collected at the end of each period for comparative analysis to assess the effects of HS on goats.
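As a worked illustration, the THI computation above can be expressed as a short R function; the example inputs are hypothetical and serve only to show that mild and hot-humid conditions map to THI values near the two period averages reported here.

# THI from ambient temperature (degrees C) and relative humidity (%)
thi <- function(temp_c, rh_pct) {
  (1.8 * temp_c + 32) - (0.55 - 0.0055 * rh_pct) * (1.8 * temp_c - 26)
}
thi(16, 50)   # mild spring conditions -> about 60
thi(30, 65)   # hot, humid summer conditions -> about 81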
Feed sampling and analyses
Dried feed samples (tall fescue and concentrate) were ground through a 1 mm sieve using a Wiley Mill (Arthur Thomas Co., Philadelphia, PA) and submitted to Cumberland Valley Analytical Services Inc. (Waynesboro, PA) to be analyzed by wet chemistry methods for dry matter (DM; AOAC International, 2000; method 930.15), crude protein (CP; AOAC International, 2000; method 990.03), ether extract (EE; AOAC International, 2006; method 2003.05), ash (AOAC International, 2000; method 942.05), minerals (AOAC International, 2000; method 985.01), amylase-treated neutral detergent fiber (aNDF) , , acid detergent fiber (ADF; AOAC International, 2000; method 973.18), neutral detergent insoluble crude protein (NDICP; Leco FP-528 N Combustion Analyzer), acid detergent insoluble crude protein (ADICP; Leco FP-528 N Combustion Analyzer), lignin, and starch. Non-fiber carbohydrates (NFC) were calculated according to the equation NFC = 100 - [(CP - NDICP) + EE + ash + NDF]. Net energy for maintenance was calculated using the OARDC Summative Energy Equation. The chemical compositions of the tall fescue and the commercial concentrate are presented in Table .

Rumen fluid and blood sampling analyses
Rumen fluid contents were collected with oral stomach tubing (length of 150 cm and diameter of 0.8 cm) from each animal (n = 10) before the morning feeding . Briefly, rumen fluid samples were collected by inserting an oral stomach tube to a depth of about 20 cm, so that the probe head could reach the central rumen fluid. To minimize contamination from saliva, the first 20 mL of each rumen fluid sample was discarded. The samples were centrifuged at 806 × g at 4 ℃ for 15 min to remove feed particles, and the supernatant was stored at -80 ℃ for 1H-NMR spectroscopy analysis; filtered rumen fluid (5 mL) was centrifuged at 20,000 × g at 4 ℃ for 15 min, the supernatant was discarded, and the pellet was stored at -80 ℃ for microbial analysis. On day 20 of each sampling period, before the morning feeding, blood was collected from the jugular vein into a serum-separating tube (BD Vacutainer, SST™ II Advance, Becton Dickinson Co., Franklin Lakes, NJ, USA). The blood samples were centrifuged at 1006 × g at 4 ℃ for 15 min, and the serum was stored at -80 ℃ until 1H-NMR spectroscopy analysis.

Samples preparation and 1H-NMR spectroscopy analysis
The rumen fluid sample was centrifuged at 12,902 × g at 4 ℃ for 10 min, and 300 µL of the supernatant was collected. A standard buffer solution (2,2,3,3-d(4)-3-(trimethylsilyl) propionic acid [TSP] sodium salt) in deuterium oxide (D2O) solvent (300 µL) was added to the 300 µL of supernatant. The resulting mixtures (600 µL) were transferred to 5 mm NMR tubes for 1H-NMR spectroscopy analysis , . We prepared saline buffer (0.9% wt/vol) by dissolving NaCl in 100% D2O solvent. The stored serum samples were centrifuged at 14,000 × g at 4 ℃ for 10 min. The supernatant (200 µL) was added to 400 µL of saline buffer in a 5 mm NMR tube for 1H-NMR spectroscopy analysis , .
The spectra of the rumen fluid and serum were obtained on an SPE-800 MHz NMR-MS spectrometer (Bruker BioSpin AG, Fällanden, Switzerland) equipped with a 5 mm triple-resonance inverse cryoprobe with Z-gradients (Bruker BioSpin Co., Billerica, Massachusetts, USA). The pulse sequence used for both the rumen fluid and serum was a Carr-Purcell-Meiboom-Gill (CPMG) sequence. We collected 64,000 data points with 128 transients, a spectral width of 16,025.641 Hz, a relaxation delay of 4.0 s, and an acquisition time of 2.0 s .

Metabolites identification, quantification and statistical analyses
Metabolite identification and quantification were carried out by importing the analyzed spectral data into the Chenomx NMR Suite 8.4 software (Chenomx Inc., Edmonton, Alberta, Canada). The baseline and phase were matched for comparison between samples using the Chenomx processor. The spectral width was δ 0.2 to 10.0 ppm and was referenced to the TSP signal at 0.0 ppm. Qualitative and quantitative metabolite analyses were performed using the Livestock Metabolome Database ( http://www.lmdb.ca ), the Bovine Metabolome Database ( http://www.bmdb.ca ), and the Chenomx profiler. Statistical analyses of the metabolite data were performed using MetaboAnalyst 5.0 ( http://www.metaboanalyst.ca ). To perform a standard cross-sectional two-period study, we compared the OTP and HTP conditions. The data were processed by sample normalization via normalization to a constant sum, data transformation via log transformation, and data scaling via Pareto scaling . Rumen fluid and serum metabolites with 50% of samples under the identification limit or with at least 50% of values missing were eliminated from the analysis. Missing values were imputed based on the minimum positive values in the original data. The univariate Student's t-test was used to quantify differences between the metabolite profiles of the rumen fluid and serum under the OTP and HTP conditions. P values were corrected for the false discovery rate (FDR), and P < 0.05 and 0.05 ≤ P ≤ 0.10 were considered significant and tendency effects, respectively. Principal component analysis (PCA) and partial least squares-discriminant analysis (PLS-DA) were used as multivariate data analysis techniques to generate a classification model and provide quantitative information for discriminating rumen fluid and serum metabolites. The discriminating rumen fluid and serum metabolites of the OTP and HTP conditions were determined based on a statistically significant threshold of variable importance in projection (VIP) scores; metabolites with VIP scores higher than 1.5 were retained from the PLS-DA model. Metabolic pathway analysis was performed using the Bos taurus pathway library [Kyoto Encyclopedia of Genes and Genomes (KEGG), http://www.kegg.com ]. Significantly different metabolic pathways in the rumen fluid and serum metabolites of the study animals were statistically analyzed and determined using MetaboAnalyst 5.0, which is based on the KEGG database.
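A minimal R sketch of the preprocessing and univariate testing described above is given below; the matrix raw and factor period are hypothetical, and the imputation step is omitted for brevity.

# Minimal sketch: constant-sum normalization, log transformation, Pareto scaling,
# and FDR-corrected t-tests across metabolites (samples x metabolites).
set.seed(99)
raw <- matrix(rlnorm(20 * 149), nrow = 20,
              dimnames = list(NULL, paste0("met", 1:149)))
period <- factor(rep(c("OTP", "HTP"), each = 10))

X <- raw / rowSums(raw)                       # sample normalization (constant sum)
X <- log(X)                                   # log transformation
X <- scale(X, center = TRUE,                  # Pareto scaling: centered values
           scale = sqrt(apply(X, 2, sd)))     # divided by sqrt of each column SD

p_raw <- apply(X, 2, function(m) t.test(m ~ period)$p.value)
p_fdr <- p.adjust(p_raw, method = "BH")       # Benjamini-Hochberg FDR correction
names(p_fdr)[p_fdr < 0.05]                    # significantly altered metabolites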
Metataxonomic sequencing and data processing
Total DNA of rumen fluid (1.8 mL) was extracted using the RBB + C method . The quality and quantity of the extracted DNA were evaluated using a NanoDrop ND-2000 spectrophotometer (Thermo Fisher Scientific Inc., Waltham, MA, USA). To amplify the V3-V4 region of the 16S rRNA gene, the primer set forward 5′-CCTACGGGNGGCWGCAG-3′ and reverse 5′-GACTACHVGGGTATCTAATCC-3′ was used. The 16S rRNA gene amplicons were sequenced on the Illumina MiSeq platform (San Diego, CA, USA) at Macrogen, Inc. (Seoul, Korea). Fastq files obtained from the MiSeq paired-end sequencing were analyzed using QIIME 2 (version 2021.11) . Briefly, after demultiplexing the sequences, the barcode and primer sequences were removed using Cutadapt . The DADA2 plugin was then used to denoise the forward and reverse reads with quality filtering (Q-score > 25), merge them, and remove chimeras . Taxonomic classification was performed using a naïve Bayes taxonomy classifier manually trained on the Silva (SSU138) 16S rRNA gene database (clustered at 99% similarity; 341F/805R region) . Unassigned, mitochondrial, and chloroplast sequences were excluded before downstream analysis. To reduce sampling heterogeneity, the ASV table was rarefied to the same sequencing depth per sample (20,080 reads) 1,000 times using the 'q2-repeat-rarefy' plugin of QIIME 2 . Microbial diversity was evaluated within samples (alpha diversity) and between samples (beta diversity) on the rarefied ASV table. Alpha diversity was evaluated using richness (Chao1 estimate), evenness, Simpson's index, and Shannon's index. Beta diversity was evaluated using the weighted and unweighted UniFrac phylogenetic distances. Prediction of the metabolic functions (KEGG modules and pathways) of the rumen microbiota was performed using the PICRUSt2 tool (v.2.4.1) .

Bioinformatics and statistical analysis
Spearman's rank correlation coefficients (| r | ≥ 0.5, P ≤ 0.05) between microbes at the genus level and metabolites in host serum and rumen fluid were identified using the PROC CORR procedure in SAS 9.4. The correlation heatmap was generated using the R package "pheatmap". To understand the relationships among the major genera (relative abundance ≥ 0.1%) under the OTP and HTP conditions, co-occurrence networks were generated using 'FastSpar', which implements the SparCC algorithm . The condition-exclusive networks were compared using Co-expression Differential Network Analysis (CoDiNA) . To define network statistics, we used the built-in plugins in Gephi (v. 0.9.2) to calculate centrality measurements (i.e., eigenvector centrality and authority). The "randomForest" (RF) package in R was used for the RF analysis . The rumen metabolites were used as inputs in the RF model. For each metabolite, a mean decrease accuracy score was calculated based on the increase in error caused by removing that metabolite from the predictors; this score reflects the importance of the metabolite in the model. The best predictive model was identified based on the maximum area under the curve (AUC), using the "pROC" package in R . To minimize potential overfitting, we applied a 10-fold cross-validation approach using the trainControl function of the caret package in R . To predict the probabilities of co-occurrence between microbial genera and metabolites in host rumen fluid, we employed the microbe-metabolite vectors (mmvec) neural network-based approach, which infers the nature of interactions across omics datasets . The interactions between microbes and metabolites were ranked and visualized through the standard dimensionality reduction interface implemented as a plugin in QIIME 2 (version 2021.2) . Data normality was analyzed using the Shapiro-Wilk test in SAS 9.4 (SAS Institute Inc., NC, USA). Normally distributed data were further analyzed using the t-test.
For non-normally distributed data, a non-parametric Wilcoxon rank-sum test was used, and P values were corrected for the false discovery rate using the Benjamini-Hochberg method, with FDR-corrected P < 0.05 considered significant. The resulting distance matrices served as inputs for principal coordinates analysis (PCoA), and the significance of sample clustering was analyzed by permutational multivariate analysis of variance (PERMANOVA) with 9,999 permutations. The differential relative abundances of the rumen microbiota and its predicted metabolic categories were analyzed via linear discriminant analysis effect size (LEfSe) using the Galaxy web application . The normalized ASV counts in each sample were used as the input for the LEfSe analysis. LEfSe uses nonparametric factorial Kruskal-Wallis and Wilcoxon rank-sum tests followed by linear discriminant analysis to estimate the effect size of each taxon. A significance level of P < 0.05 and an effect size threshold of 2 were applied to identify biomarker taxa. Statistical significance was set at P < 0.05, and a tendency of difference was declared at 0.05 ≤ P ≤ 0.10.
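The genus-metabolite screening described above can be sketched in R as follows; the matrices genera and mets are hypothetical (samples in rows), and exact P values may be replaced by approximations when ties occur.

# Minimal sketch: Spearman screening of genus-metabolite pairs and a heatmap.
set.seed(5)
genera <- matrix(runif(20 * 14), nrow = 20,
                 dimnames = list(NULL, paste0("genus", 1:14)))
mets <- matrix(rnorm(20 * 35), nrow = 20,
               dimnames = list(NULL, paste0("met", 1:35)))

r_mat <- cor(mets, genera, method = "spearman")
p_mat <- outer(colnames(mets), colnames(genera),
               Vectorize(function(m, g)
                 cor.test(mets[, m], genera[, g], method = "spearman")$p.value))

keep  <- abs(r_mat) >= 0.5 & p_mat <= 0.05    # strong, significant correlations
r_sel <- r_mat[rowSums(keep) > 0, , drop = FALSE]
pheatmap::pheatmap(r_sel)                     # correlation heatmap (pheatmap)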
Below is the link to the electronic supplementary material. Supplementary Material 1
Cytokine | 85867bb1-58c3-4110-835c-0c9dc7611c3c | 8176189 | Histology[mh] | Gastric cancerogenesis is accepted today to be correlated with long-standing chronic inflammation in the stomach mucosa, most frequently due to Helicobacter pylori ( H. pylori ) infection . Both host and H. pylori genetic factors seem to play a crucial role in gastric inflammation and progression toward cancer . “Gastric cancer phenotype” described as corpus predominant gastritis, atrophic gastritis and/or intestinal metaplasia (AGIM), decreased acid secretion or H. pylori infection (present or previous) increases the risk for cancer . On the other hand, autoimmune atrophic gastritis, a condition diagnosed in the presence of corpus gastric atrophy with antrum sparing, can also lead to the development of adenocarcinoma or neuroendocrine neoplasia . Recent meta-analysis support that single nucleotide polymorphisms may be used for the assessment of genetic predisposition to gastric cancer, with ethnicity, environmental factor and cancer subtype contributing to inconsistency in the results . From among all molecular factors playing a role in progression from inflammation toward preneoplastic gastric lesions and cancer, recent research has focused on the possible influence of different type cytokine gene variants . Recently, it was shown that cytokine gene variants are associated with susceptibility to different types of malignancy in a Romanian population . Experimental data support that the CD4+ T cell-derived interferon-gamma (IFN-γ) provides the key stimulus for the development of gastric premalignant lesions . The IFN‐γ +874T>A (rs2430561) polymorphism influencing IFN-γ expression has not been studied in relation to mucosal gastric lesion , . The transforming growth factor-β1 ( TGF-β1 ) gene and its protein play an important role in modulating expression of multiple genes, being involved in inflammatory responses in gastric mucosa and cancer progression . Published meta-analysis did not confirm the association between TGF-β1 +869T>C (rs1800470) and 915G>C (rs1800471) polymorphisms and the risk for gastric cancer , while another indicated that TGF β1 -509C>T rather than +869T>C can increase the risk for gastric cancer . The functional TGF-β1 +869T>C, rs1800470 polymorphisms have been identified to be associated with expression and the level of plasma TGF-β1 protein , , . Tumor necrosis factor-alpha (TNF-α) is a potent immunomodulator and pro-inflammatory cytokine that inhibits gastric acid production, and it is upregulated in the gastric mucosa in response to H. pylori infection . Meta-analysis sustained the association between TNF- α -308G>A (rs1800629) polymorphism and gastric cancer . The variant genotype of TNF -α -308G>A was not associated with gastric atrophy (GA) in European studies , nor in meta-analysis , but has an impact on H. pylori -related gastroduodenal conditions like gastritis, ulcer or cancer . TNF- α -238G>A (rs361525) polymorphism was associated with the increased risk of gastric cancer in Chinese population, not in Caucasians , and was significantly associated with a high risk of gastritis in an African population . Interleukin-6 (IL-6) is a multifunctional cytokine (endocrine and inflammatory mediator) and its polymorphism and expression seem to influence the susceptibility to various diseases, including gastric conditions related to H. pylori infection . Meta-analysis questioning the influence of IL-6 promoter polymorphism did not reveal increased risk for gastric cancer . 
A study from Brasilia supported that the G allele frequency of IL-6 -174C>G (rs1800795) was higher in patients with gastric cancer than in patients with chronic gastritis , while others did not support any influence on gastric conditions related to H. pylori infection (gastritis, ulcer, adenocarcinoma) . The current concept supports that gastric atrophy can be a result of chronic H. pylori infection or of autoimmune gastritis; in the latter, sensitized T cells and autoantibodies are the key factors of the pathological process . H. pylori infection and autoimmune atrophic gastritis share overlapping biological characteristics, and the germ itself may accelerate progression toward atrophy in individuals with a particular genetic background . To the best of our knowledge, no published studies have investigated the influence of cytokine gene variants involved in the inflammatory response of the gastric mucosa on the extension of AGIM in patients with biopsies negative for active H. pylori-related gastritis. Based on all the observations cited above, the present study focused on assessing the associations of the IFN‐γ, TGF‐β1 , TNF-α , and IL‐6 gene polymorphisms, which may play a role in the clinical course of the gastric inflammatory response, with the histologic extent of AGIM in patients without an active inflammatory cell infiltrate (mononuclear and neutrophilic) on histology. We questioned the possible influence of inflammatory cytokine polymorphisms on the localization or extension of premalignant gastric lesions, which may confer different susceptibility to gastric cancer, irrespective of the triggering mucosal aggression. We present the following article in accordance with the STROBE reporting checklist.
Ethical consideration
The Ethical Committee of Targu Mures County Emergency Clinical Hospital (10846/15.04.2019) and of George Emil Palade University of Medicine, Pharmacy, Science and Technology of Targu Mures, Romania (282/19.07.2019) approved the study.

Study sample
We conducted a single-center observational study in 224 consecutive adult patients in whom an upper digestive endoscopy was performed in Medical Clinic II, Targu Mures County Emergency Clinical Hospital. A structured direct interview was applied after informed consent was obtained. We questioned smoking and alcohol consumption, present symptoms (pain and/or heartburn and/or nausea and/or regurgitation), and some non-recorded data of the past medical history. The medical records of the subjects were checked for previous symptoms, diagnoses, investigations, or treatments for peptic ulcer disease and/or H. pylori eradication therapies, as well as for concomitant diseases (hypertension, cardiac, respiratory, kidney or liver diseases, stroke, diabetes mellitus, atherosclerosis, dyslipidemia, osteoarticular diseases, other chronic medical conditions) or treatments with a potential gastric effect (protective or aggressive). Patients drinking at least 10 Units (1 Unit = 10 mL of pure alcohol) weekly were considered drinkers. Subjects reporting consumption of less than 10 Units of pure alcohol per week were considered non-drinkers. Patients who smoked 5 or more cigarettes/d, including recent quitters (within the last 5 years), were considered smokers. Non-steroidal non-aspirin anti-inflammatory drug (NSAID) consumption was considered if the patients took regular daily doses of over-the-counter or prescribed drugs for more than 1 mo. We recorded the use of an antiplatelet dose of aspirin (75-125 mg/d) or clopidogrel 75 mg/d for more than 1 mo. Patients were considered on acenocumarolum (ACO) therapy if they used regular doses for a therapeutic international normalized ratio for at least 2 weeks before endoscopy. Patients were considered exposed to proton pump inhibitors (PPI) (omeprazole, pantoprazole, esomeprazole) if they used regular doses within the last month, irrespective of the type of administration (continuous or on-demand). The exclusion criteria were: 1. Incomplete set of histological or clinical data; 2. Active H. pylori-related gastritis on histology using immunohistochemistry; 3. Previous gastric surgery; 4. Active bleeding during endoscopy requiring hemostatic therapy; 5. Advanced or end-stage digestive disease (cirrhosis, esophageal varices); 6. Dysplasia or gastric cancer.

Pathology
At least four biopsies (two from the antrum and two from the gastric body, both from the lesser and the greater curvature) were routinely analyzed. Cases with an absence of H. pylori infection in all biopsies on microscopy after staining tissues with hematoxylin-eosin, periodic acid Schiff-alcian blue, and Giemsa were considered negative. If the germ was present in at least one site, the case was considered H. pylori-positive and was excluded. If H. pylori infection was suspected (abundant inflammatory cells, extensive intestinal metaplasia), an immunohistochemistry study was performed, especially in patients on PPI therapy, and the case was also excluded if infection was confirmed. The Updated Sydney System was used to assess the degree of mucosal chronic inflammation and activity, H. pylori infection, glandular atrophy, and intestinal metaplasia.
The lack of biopsies from incisura in some patients did not allow us to use de OLGA/OLGIM system to quantify the severity of premalignant lesions. Moreover, we did not intend to study the association of the SNPs, clinical and endoscopic variables with the severity of premalignant lesions, but only with their presence or absence. In this study, we focused on the presence of AGIM in any part of the stomach. Antrum-limited AGIM was defined as the presence of AGIM in the antrum, with a healthy stomach corpus. Corpus-limited AGIM was defined as the presence of AGIM in the corpus, with a healthy antrum. Extended AGIM was defined as the simultaneous histological presence of AGIM in the antrum and corpus of the stomach. In patients with corpus limited changes, enterocromaffin-like cell hyperplasia and suspicion of autoimmune gastritis, further studies were performed to confirm the diagnosis (anti parietal cell antibody and anti-intrinsic factor, B12 vitamin and folate serum level). Patients with other abnormal histological changes were excluded. Genetic study Rapid extraction of genomic DNA from whole blood samples stored in EDTA tubes was performed by using PureLink Genomic DNA Mini Kits (ThermoFisher Scientific, Waltham, MA, United States). For Taq Man SNP genotyping, we used Taq Man Fast Advanced Master Mix and the assay for TNF-α rs361525 and the 7500 Fast Dx Real ‐ time PCR System (ThermoFisher Scientific). TGF‐β1 rs1800470, TNF-α rs1800629, IFN‐γ rs2430561, and IL‐6 rs1800795 SNPs were evaluated by using the previously described amplification - refractory mutation system (commonly known as the ARMS‐PCR) technique . Statistical analysis Statistical analysis was performed using the R software (version 3.6.1). Distribution of observed and expected genotypes of studied gene polymorphisms was tested for consistency with Hardy-Weinberg equilibrium and linkage equilibrium using the SNPassoc R package . Demographic variables, such as age, were described by mean and standard deviation, while other clinical categorical factors were shown using absolute frequencies and percentages. Comparisons between demographic and clinical factors among patients with different extensions of AGIM were performed using the Student's t , ANOVA, chi-square or Fisher's exact test. The differences in genotype frequencies of IFN‐γ , TGF‐β1, TNF‐α, and IL‐6 gene polymorphisms among patients with different extensions of AGIM were tested using chi-square or Fisher's exact test. In the case of a significant result ( P -value < 0.05), in order to identify the pattern of differences, we also performed the pairwise comparisons using chi-square or Fisher's exact test, and then the P -values were adjusted using the Benjamini-Hochberg method . Binomial and multinomial logistic regression analysis with adjustment for age, gender, and current smoking were performed to estimate the association between variant genotype of IFN‐γ, TGF‐β1, TNF‐α, and IL‐6 gene polymorphisms and extension of the presence of AGIM. The association between each of the studied gene polymorphisms was expressed in terms of the odds ratio (OR) and the corresponding 95% confidence interval (CI). The estimated ORs were obtained using the VGAM R package . The regression results were considered statistically significant if 95%CI for OR did not contain unity or if two-tailed P -values obtained from Wald z -tests were lower than the significance level of 0.05.
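As a concrete illustration of this workflow, the base-R sketch below reproduces the main analytical steps: Hardy-Weinberg testing, genotype-by-extension contingency tests with Benjamini-Hochberg adjustment, and an adjusted dominant-model logistic regression. This is a minimal sketch only: the data frame agim and its column names are hypothetical placeholders, and the original analysis relied on the SNPassoc and VGAM packages rather than the hand-rolled calls shown here.

# Hardy-Weinberg equilibrium: chi-square test from observed genotype counts
hwe_p <- function(n_AA, n_Aa, n_aa) {
  n <- n_AA + n_Aa + n_aa
  p <- (2 * n_AA + n_Aa) / (2 * n)             # estimated frequency of allele A
  e <- n * c(p^2, 2 * p * (1 - p), (1 - p)^2)  # expected genotype counts
  chisq <- sum((c(n_AA, n_Aa, n_aa) - e)^2 / e)
  pchisq(chisq, df = 1, lower.tail = FALSE)    # 3 classes - 1 - 1 estimated parameter
}

# Genotype frequencies vs AGIM extension, with Benjamini-Hochberg adjustment
p_raw <- sapply(agim[, c("IFNG", "TGFB1", "TNFA", "IL6")],
                function(g) fisher.test(table(g, agim$extension))$p.value)
p_adj <- p.adjust(p_raw, method = "BH")

# Dominant-model binomial logistic regression adjusted for age, gender, smoking
fit <- glm(corpus_agim ~ variant_carrier + age + gender + smoking,
           family = binomial, data = agim)
exp(cbind(OR = coef(fit), confint.default(fit)))  # Wald-based ORs with 95% CIs

# The multinomial model (no AGIM as reference level) can be fitted analogously
# with VGAM::vglm(extension ~ ..., family = VGAM::multinomial(refLevel = 1)).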
Study sample

Among the 224 consecutive patients included in the study, AGIM was present in 73 (32.6%) cases. The histopathological investigation revealed the following extension of AGIM: 37 (50.7%) patients had antrum-limited AGIM; 21 (28.8%) patients had corpus-limited AGIM; and 15 (20.5%) patients had extended AGIM, involving both antrum and corpus. The main demographic and clinical variables are presented in Table . The mean age of the studied patients was 62.3 ± 12.7 years, ranging from 20 to 85 years, with a significant difference between the group with AGIM and patients with no AGIM (Student's t-test, P = 0.004). The mean age of patients with AGIM was higher than that of patients with no AGIM (65.52 ± 10.47 vs 60.79 ± 13.37 years). The distribution of gender was similar between the groups with and without AGIM (chi-square test, P = 0.776). Although the frequency of gastrotoxic drug consumption (aspirin, non-aspirin NSAIDs) was higher in patients with antrum-limited AGIM (n = 19; 51.4%) than in those with corpus-limited AGIM (n = 8; 38.1%) or extended AGIM (n = 7; 46.7%), the observed differences were not statistically significant (chi-square test, P = 0.733). The frequency of gastro-protective (PPI) drug use was similar across the different histologic extensions of AGIM: 105 (69.5%) for no AGIM vs 27 (73.0%) for antrum-limited AGIM vs 13 (61.9%) for corpus-limited AGIM vs 10 (66.7%) for extended AGIM (chi-square test, P = 0.874). There was a significant association between ACO consumption and localization of AGIM (Fisher's exact test, P = 0.042), with pairwise comparisons showing that the frequency of ACO consumption was higher among patients with corpus-limited AGIM than in those with no AGIM (Fisher's exact test, P = 0.007, adjusted P = 0.041). There were no significant differences in the frequency distribution of any digestive symptom among patients with antrum-limited, corpus-limited and extended AGIM (chi-square test, P = 0.168 for abdominal pain, P = 0.125 for pyrosis; Fisher's exact test, P = 0.624 for nausea and P = 0.895 for regurgitation; chi-square test, P = 0.230 for bloating). There was a significant association between smoking and the distribution pattern of AGIM (Fisher's exact test, P = 0.026), with the post-hoc analysis identifying nominally significant differences between patients with extended AGIM and those with antrum-limited AGIM (P = 0.015, adjusted P = 0.091) and those with corpus-limited AGIM (Fisher's exact test, P = 0.032, adjusted P = 0.097).

Hardy-Weinberg equilibrium for IFN-γ, TGF-β1, TNF-α and IL-6 gene polymorphisms

The studied SNPs were tested for Hardy-Weinberg equilibrium. The distribution of observed and expected genotype frequencies of each SNP did not differ significantly in patients with and without AGIM, and all the studied SNPs (except TNF-α -238G>A, for which we found only two genotypes) were in agreement with the Hardy-Weinberg equilibrium expectation (for the AGIM group and for patients without AGIM, respectively: IFN-γ +874T>A, P = 0.1543 and P = 0.6204; TGF-β1 +869T>C, P = 0.8533 and P = 1.000; TNF-α -308G>A, P = 0.4437 and P = 0.1332; and IL-6 -174C>G, P = 1.0000 and P = 0.4968). We also tested linkage disequilibrium between the TNF-α -308G>A and -238G>A SNPs and found no significant linkage disequilibrium (P = 0.801, D' = 0.047, r² = 0.012).

Association of TGF-β1, TNF-α, IFN-γ, and IL-6 gene polymorphisms with histologic extensions of AGIM

Table summarizes the frequencies of the studied SNPs' genotypes in relation to the histologic extension of AGIM. There was no significant difference in variant versus wild-type genotype frequencies among patients with different localizations of AGIM, except for the TGF-β1 +869T>C gene polymorphism (chi-square test, P = 0.031). In addition, the variant genotype of the TGF-β1 +869T>C gene polymorphism occurred less frequently among patients with corpus-limited AGIM (n = 7, 33.3%) and extended AGIM (n = 5, 33.3%) than among those with antrum-limited AGIM (n = 25, 67.6%). As described in Table , the dominant inheritance models showed a significant association only for the TGF-β1 +869T>C gene polymorphism, with a decreased risk of corpus-affected AGIM (adjusted OR = 0.42, 95%CI: 0.19-0.93, P = 0.032).
Our study investigated environmental factors, as well as IFN-γ, TGF-β1, TNF-α and IL-6 gene polymorphisms of cytokines involved in the immune response, in patients with different extents of AGIM without active H. pylori infection on histology. Knowledge of cytokine functions and their cellular actions has increased in recent years, and our understanding of the link between inflammation and gastric carcinogenesis has advanced greatly. Even though gastritis associated with H. pylori infection, mainly involving the antrum, is distinct from atrophic autoimmune gastritis (implicating the gastric corpus, because the inflammatory process affects the parietal gastric cells), the correlation between the two is still controversial. A recent meta-analysis supported that autoimmune mechanisms might exacerbate H. pylori gastritis in the absence of classical biological features of autoimmune gastritis. We did not intend in our study to clarify the etiology of gastric atrophy or metaplasia; rather, we started this research to investigate inflammatory gene polymorphisms that may be relevant to the increasing number of H. pylori-negative biopsies in dyspeptic or anemic patients undergoing endoscopy, probably due to unintentional eradication or spontaneous disappearance of the germ in "old" atrophic gastritis. The corpus-limited lesions group included patients both with and without confirmed autoimmune gastritis. The frequency of AGIM in the studied sample (32.6%) was three times higher than the prevalence of intestinal metaplasia reported by Huang et al. in a recent large retrospective American study of 17,710 biopsies, and also higher than that reported in European studies (one in four biopsies of patients undergoing gastroscopy). Both genetic and environmental factors are accepted to play a role in this discrepancy, including the prevalence of H. pylori infection. There were no significant differences regarding symptoms or history of ulcer in patients with or without AGIM, nor in patients with different extents of histologic lesions. NSAID or aspirin consumption was not different across the studied groups, while ACO therapy was more frequent in patients with corpus-limited AGIM than in those with no AGIM. As previously reported, the use of ACO was frequently associated with the extension of preneoplastic lesions. Vitamin B12 and folic acid deficiency occurring in corporeal autoimmune gastritis are associated with variably increased homocysteine levels, which seem to increase the risk of thrombotic events. Even though the association remains controversial, the link between atrophic gastritis and thrombotic risk should be further investigated. Regarding the relation between smoking and the histologic extent of AGIM, our findings revealed that smoking was associated with AGIM, as in other similar studies of dyspeptic patients or of endoscopic lesions. The underlying mechanisms seem to be related to effects on mucosal cell death, proliferation, decreased blood flow, or modulation of the immune responses in the gastric mucosa. Our data showed that the IFN-γ +874T>A, TNF-α -308G>A and -238G>A, and IL-6 -174C>G polymorphisms were not associated with the extent of AGIM in patients without active H. pylori infection assessed immunohistochemically. Similar results were observed in gastric cancer, and in gastric atrophy in Europeans, but no data are available for patients without active H. pylori infection.
Among all the polymorphisms studied, only the TGF-β1 +869T>C SNP was associated with the localization of premalignant gastric lesions in patients without H. pylori infection. The multifunctional TGF-β1 protein controls cell growth, proliferation, differentiation and apoptosis, and its role in carcinogenesis has been extensively studied. In our study, the variant genotype of TGF-β1 +869T>C was associated with a protective effect against corporeal localization of AGIM. These findings suggest a possible effect of the TGF-β1 +869T>C (rs1800470) polymorphism on modulation of the host immune response in the gastric corpus mucosa. It is accepted today that H. pylori infection might mediate aggression against the proton pump, leading to corporeal atrophic mucosal changes similar to those of primary gastric autoimmunity, with the germ subsequently disappearing from the altered mucosa. This overlapping pathogenic mechanism of gastric corpus atrophy might be influenced by the TGF-β1 +869T>C SNP, modulating the cytokine activity that plays a role in gastric inflammation via regulatory mechanisms. Our study is the first to question the role of cytokine polymorphisms in premalignant gastric lesions in patients with unintentional or spontaneous disappearance of H. pylori infection in gastric biopsies or with autoimmune gastritis, as recent studies underline important roles for cytokines in regulating corporeal atrophy, hyperplasia, different types of metaplasia, and gastric carcinogenesis. Our study opens a possible direction of research for developing alternative biologic markers for assessing the risk of premalignant gastric lesions and associations with other medical conditions. One limitation of the study is the lack of data regarding plasma cytokines (IFN-γ, TGF-β1, TNF-α, and IL-6). The second limitation was the lack of separate risk estimates for corpus-limited and extended AGIM, owing to the small numbers of cases with variant genotypes of the studied SNPs. Although the TGF-β1 +869T>C gene polymorphism was significantly associated with a decreased risk of corpus-affected AGIM, the results should be regarded with caution due to the small frequencies of the variant genotype, and their clinical significance should be retested in a larger sample. A third limitation is that the final regression model does not include the TNF-α -238G>A polymorphism, due to a lack of cases combining the variant genotype with extension of AGIM into the gastric corpus. Further studies should investigate the effect of the TGF-β1 +869T>C (rs1800470) polymorphism on the occurrence of chronic gastritis in specific populations. The fourth limitation was the small sample size, which did not allow the development of a multivariable logistic model studying clinical predictors together with the IFN-γ, TGF-β1, TNF-α, and IL-6 genes for different localizations or extensions of AGIM. The small sample size of each group based on AGIM localization also explains the wide confidence intervals of the estimated parameters (cOR, aOR), so further studies with larger sample sizes should be conducted to achieve greater precision and to investigate the ability of a multivariable clinical and genetic model to predict the extent of AGIM, as well as the importance of past or present H. pylori infection.
In patients without active H. pylori gastritis in biopsy samples, the TGF-β1 +869T>C gene polymorphism was associated with a decreased risk of corporeal localization of AGIM. The dominant inheritance models revealed no significant association of the IFN-γ +874T>A, TNF-α -308G>A and IL-6 -174C>G gene polymorphisms with the risk of localization of AGIM. Higher consumption of ACO was observed in patients with corpus-limited precancerous lesions, while symptoms were not associated with the localization of premalignant lesions. In patients without active H. pylori infection, smoking was associated with extended AGIM.
Data-driven cluster analysis identifies distinct types of metabolic dysfunction-associated steatotic liver disease | a4506cc7-8542-4c04-87e7-d8ba1253ca97 | 11645276 | Biochemistry[mh] | Nonalcoholic fatty liver disease, now referred to as metabolic dysfunction-associated steatotic liver disease (MASLD) , , is currently the most common chronic liver disease worldwide, with an estimated global prevalence of approximately 30% (ref. ). MASLD comprises a spectrum of disorders ranging from isolated steatosis to metabolic dysfunction-associated steatohepatitis (MASH), ultimately leading to advanced fibrosis, cirrhosis and hepatocellular carcinoma . However, not every individual diagnosed with MASLD will progress to MASH and later stages of liver disease, indicating the presence of a substantial interindividual variation in the disease progression . Furthermore, MASLD harbors an increased risk of cardiovascular disease and type 2 diabetes , , which also widely varies among individuals. This interindividual variability in the severity and progression of MASLD and its extrahepatic consequences, together with the challenges of finding a specific drug treatment, highlight the need for more personalized approaches – . Given this context, advancements in diagnostic strategies for risk stratification and efficient testing of new drugs in at-risk populations are urgently needed . Emerging evidence points to the clinical relevance of distinguishing different types of MASLD on the basis of distinct pathophysiological mechanisms and rates of disease progression . For example, genetic predisposition to hepatic steatosis is associated with increased risk of liver-related events, while offering protection against coronary artery disease , . Specifically, PNPLA3 rs738409 (p.I148M), the strongest genetic variant predisposing to MASLD, is associated with a reduction in intrahepatic turnover of lipids droplets but is not causally linked to ischemic heart disease in individuals with MASLD . In contrast, other mechanisms central to MASLD pathophysiology, such as hepatic de novo lipogenesis or adipose tissue dysfunction, have been associated with insulin resistance and a higher risk for type 2 diabetes and cardiovascular disease, but with only a moderate risk of liver-related events . In the present study, we identified two types of MASLD by using a data-driven clustering approach focused on key hepatic and cardiometabolic traits. These two MASLD types have distinct biological profiles and risks for cardiometabolic disease and diabetes, despite having the same severity of MASLD on liver histology. We then clustered four independent cohorts of individuals at-risk for MASLD from Italy, Finland, Belgium and the United Kingdom, with consistent results, supporting the validity of the proposed clustering.
Cluster analysis identifies two distinct types of MASLD

Cluster analysis and identification of MASLD types were performed on the basis of the data of 1,389 French participants from the Atlas Biologique de l'Obésité Sévère (ABOS) cohort (Extended Data Fig. ). Overall, we identified six clusters with distinctive patterns of the six clustering variables in the ABOS cohort (Fig. ). We then added patients from three independent cohorts to these clusters, namely, the Universitair Ziekenhuis Antwerpen (UZA) cohort from Belgium ( n = 463), the Molecular Architecture of FAtty Liver Disease in individuals with obesity undergoing bAriatric surgery (MAFALDA) cohort from Italy ( n = 261) and the Helsinki cohort from Finland ( n = 375) (Extended Data Fig. ). Due to the low number of participants in some individual clusters across cohorts, we pooled the three cohorts for the following analyses, resulting in a consolidated cohort of 1,099 individuals, referred to hereafter as the validation cohort (Fig. ).

In the ABOS cohort, cluster 1 contained 18% of participants and was characterized by older age and hypertension; cluster 2 included 11% of participants and had the highest hemoglobin A1c (HbA1c), high triglycerides and hypertension; cluster 3 had 13% of participants, young age and the highest body mass index (BMI); cluster 4 had 26% of participants and the highest low-density lipoprotein (LDL) cholesterol levels; cluster 5 had 7% of participants and the highest alanine aminotransferase (ALT) levels; and cluster 6 had 24% of participants and a majority of females with a more favorable metabolic profile (Fig. and Extended Data Table ). Despite marked differences in age and prevalence of type 2 diabetes between clusters 2 and 5, liver histology revealed a high prevalence of MASH and advanced fibrosis ( F ≥ 3) in these two subgroups, as compared with the other clusters combined: 33.6% and 24.2% versus 5.0%, and 21.8% and 15.8% versus 3.4%, respectively (all adjusted P < 0.001 versus other clusters combined). To further examine the potential differences in mechanisms driving MASH, we pooled the clusters with lower severity of MASLD (clusters 1, 3, 4 and 6) into a 'control' cluster, which was compared with cluster 2 and cluster 5 (Fig. and Table ).

To replicate these findings, we then assigned the participants of the three validation cohorts with liver histology (UZA, MAFALDA and Helsinki) to the same subgroups, based on which cluster they were most similar to. Results showed similar distributions of clusters across the three cohorts (Figs. and , and Extended Data Fig. ). As in the ABOS cohort, the potential cardiometabolic cluster (cluster 2), characterized by the highest HbA1c, hypertension and dyslipidemia, and the liver-specific cluster (cluster 5), characterized by the highest ALT, were similarly enriched in participants presenting more severe histological features of MASLD, including MASH and liver fibrosis. We further confirmed the association of the cardiometabolic and liver-specific clusters with an at-risk liver phenotype in a subset of the UK Biobank participants ( n = 6,792) who underwent liver magnetic resonance imaging (MRI). Consistent with what was observed with histology in the ABOS cohort, the cardiometabolic and liver-specific clusters in the UK Biobank were similarly enriched in participants presenting typical features of hepatic steatosis (proton density fat fraction (PDFF) >5.5%) and MASH (PDFF >5.5% and iron-corrected T1 (cT1) >800 ms) (Fig. and Extended Data Table ).
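The assignment of validation-cohort participants to the ABOS-derived clusters, described above as choosing the cluster each individual was "most similar to", can be illustrated with a nearest-centroid rule on standardized clustering variables. The sketch below is a hypothetical illustration only: the data frames (abos, validation) and column names are placeholders, and k-means on z-scored variables stands in for the published clustering procedure.

vars <- c("age", "bmi", "hba1c", "ldl", "alt", "triglycerides")  # six clustering variables

# Standardize the derivation cohort and derive six clusters
z_abos <- scale(abos[, vars])
set.seed(1)                                    # reproducible centroids
km <- kmeans(z_abos, centers = 6, nstart = 50)

# Standardize validation participants with the ABOS means/SDs, then assign
# each one to the nearest ABOS centroid (Euclidean distance)
z_val <- scale(validation[, vars],
               center = attr(z_abos, "scaled:center"),
               scale  = attr(z_abos, "scaled:scale"))
d <- as.matrix(dist(rbind(km$centers, z_val)))[-(1:6), 1:6, drop = FALSE]
validation$cluster <- max.col(-d)              # index of the closest centroid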
The liver-specific cluster is enriched in at-risk genetic variants

MASLD has a strong genetic component, with variants in PNPLA3 , TM6SF2 , MBOAT7 and GCKR accounting for a large fraction of its heritability and accelerating liver disease progression to MASH, cirrhosis and hepatocellular carcinoma. We hypothesized that the liver-specific cluster could be enriched in these genetic variants. Therefore, we examined the difference in the distribution of the polygenic risk score of hepatic fat content (PRS-HFC) in the liver-specific cluster 5 compared with the cardiometabolic and control clusters in ABOS, finding an enrichment of PRS-HFC in this cluster (adjusted P = 0.034 and adjusted P < 0.001 versus the cardiometabolic and control clusters, respectively) (Table ). Results were similar when we considered only the PNPLA3 rs738409 variant ( P < 0.01 and P < 0.001 versus the cardiometabolic and control clusters, respectively) (Fig. ). These results were confirmed in UK Biobank participants (Extended Data Table ).

Risk of liver and cardiovascular outcomes, and type 2 diabetes

In the UK Biobank, individuals allocated to the six clusters exhibited similar characteristics to those observed in the ABOS cohort (Extended Data Table and Extended Data Fig. ). During a median (interquartile range) follow-up of 13.4 (12.6–14.1) years, 2,676 (1.12%) individuals developed chronic liver disease, with the liver-specific and cardiometabolic clusters showing the highest cumulative incidence (both P < 0.001 versus control cluster) (Fig. and Extended Data Table ). Following adjustment for age, sex and alcohol intake, the liver-specific and cardiometabolic clusters had a more than fourfold increased risk of chronic liver disease compared with the control cluster (adjusted hazard ratio (HR) 4.52, 95% confidence interval (CI) 3.88–5.26, P < 0.001, and adjusted HR 4.04, 95% CI 3.50–4.66, P < 0.001, respectively) (Fig. ). During a median (interquartile range) follow-up of 13.4 (12.7–14.1) years, 20,721 (10.59%) individuals developed cardiovascular disease, with the cardiometabolic cluster showing the highest cumulative incidence: 21.88% in the cardiometabolic cluster versus 10.37% in the control cluster (HR 2.31, 95% CI 2.16–2.47; P < 0.001 versus control), and 9.52% in the liver-specific cluster (HR 0.91, 95% CI 0.82–1.00; P = 0.054 versus control) (Fig. and Extended Data Table ). When the analysis was adjusted for age, sex and alcohol intake, the cardiometabolic cluster had a significantly increased risk of cardiovascular disease compared with the control cluster (adjusted HR 1.80, 95% CI 1.68–1.93; P < 0.001), which was also significantly higher than the increase in risk of the liver-specific cluster compared with the control cluster (adjusted HR 1.18, 95% CI 1.07–1.31; P = 0.001) (Fig. ). During a median (interquartile range) follow-up of 13.3 (12.6–14.1) years, 8,563 (4.35%) individuals developed type 2 diabetes, with the cardiometabolic cluster showing the highest cumulative incidence ( P < 0.001 versus both liver-specific and control clusters) (Fig. and Extended Data Table ).
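The adjusted estimates in this and the following paragraphs come from Cox proportional hazards models. A minimal sketch of such a model with the survival package is shown below; the ukb data frame and its variable names are hypothetical placeholders.

library(survival)

# Control cluster as the reference level, as in the reported comparisons
ukb$cluster <- relevel(factor(ukb$cluster), ref = "control")
fit <- coxph(Surv(follow_up_years, incident_t2d) ~ cluster + age + sex + alcohol,
             data = ukb)
summary(fit)   # exp(coef) gives cluster-specific hazard ratios with 95% CIs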
Following adjustment for age, sex and alcohol intake, the cardiometabolic cluster had a nearly sevenfold increased risk of developing type 2 diabetes compared with the control cluster (adjusted HR 6.82, 95% CI 6.01–7.73; P < 0.001), which was higher than the increase in risk of the liver-specific cluster compared with the control cluster (adjusted HR 2.91, 95% CI 2.62–3.23; P < 0.001) (Fig. ). Of note, a majority of participants from the cardiometabolic cluster also presented with type 2 diabetes, which may explain the higher risk of cardiovascular disease observed in this cluster. Likewise, the mean HbA1c level remained higher in the cardiometabolic cluster after excluding patients with preexisting type 2 diabetes for the analysis of incident diabetes (Extended Data Table ). However, adjusting for HbA1c did not fully remove the association of the cardiometabolic cluster with type 2 diabetes risk. Sensitivity analyses excluding individuals with BMI <27 kg m −2 or those with excessive alcohol consumption (>50/60 g per day for women/men) showed results similar to the main analysis (Extended Data Table ). In summary, the cardiometabolic cluster had a higher risk of developing cardiovascular disease and type 2 diabetes, and a similar risk of developing chronic liver disease, compared with the liver-specific cluster.

The added value of clustering beyond individual variables

We then explored the added value of the proposed clustering, beyond each of its individual components, for predicting the various clinical outcomes. For each outcome, we first examined the overall predictive power of each variable of interest compared with clustering alone. No individual variable performed better than clustering at predicting all three clinical outcomes simultaneously (Extended Data Table ). For example, ALT alone predicted incident chronic liver disease better than clustering, but clustering was superior at predicting cardiovascular disease. In contrast, HbA1c predicted incident cardiovascular disease better than clustering, but clustering performed better in the prediction of chronic liver disease. Likewise, among patients without diabetes at the time of inclusion, age, BMI, HbA1c, ALT and triglycerides each performed better than clustering alone in predicting the risk of incident diabetes. In contrast, clustering did better than LDL cholesterol alone at predicting all outcomes. Second, we performed multivariable analyses, in which the clustering model was adjusted first for sex, age and alcohol use and then, one by one, for ALT, HbA1c, triglycerides, BMI or LDL cholesterol (Fig. ). Although in most cases the HR estimates of at-risk clusters were reduced after further adjustment for one other clustering variable, all values remained statistically significant compared with the control cluster in at least one at-risk cluster for each outcome. Collectively, these data show that clustering was superior to each individual variable in predicting all three clinical trajectories simultaneously.

Differential liver transcriptomic analysis across clusters

To gain insights into the biological differences between the cardiometabolic and liver-specific clusters, we performed differential gene expression analysis in the liver in a subset of the ABOS cohort participants, including 97 individuals from the cardiometabolic cluster, 63 from the liver-specific cluster and 671 from the control cluster.
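The pairwise cluster contrasts underlying this analysis can be sketched as follows. The tool actually used is not restated in this section, so DESeq2 serves here purely as a hypothetical example, with a placeholder count matrix and sample annotations.

library(DESeq2)

dds <- DESeqDataSetFromMatrix(countData = liver_counts,  # genes x samples
                              colData   = sample_info,   # contains a 'cluster' factor
                              design    = ~ cluster)
dds <- DESeq(dds)
res_cm_vs_liv   <- results(dds, contrast = c("cluster", "cardiometabolic", "liver_specific"))
res_cm_vs_ctrl  <- results(dds, contrast = c("cluster", "cardiometabolic", "control"))
res_liv_vs_ctrl <- results(dds, contrast = c("cluster", "liver_specific", "control"))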
The comparison of the cardiometabolic and the liver-specific clusters showed upregulation of genes involved in cholesterol metabolism and biosynthesis (for example, HMGCS1 , MVD , CYP51A1 , LSS , SC5D and LDLR ) and glycolysis (for example, ALDOC ) in the cardiometabolic cluster (Fig. and Supplementary Table ); these were also identified as enriched pathways by Gene Ontology biological processes (GO-BP) analysis, together with alcohol metabolic processes (Extended Data Fig. ). The chitinase 3-like 1 ( CHI3L1 ) gene, linked to liver fibrogenesis, was the most highly differentially expressed, possibly reflecting a slightly higher, albeit not significantly different, fibrosis stage in the individuals in this cluster, as well as an older age (Table ). Similar results were obtained when comparing the cardiometabolic and the control clusters, confirming the upregulation of genes involved in cholesterol metabolism and synthesis in the cardiometabolic cluster (Extended Data Fig. ), mirroring the higher metabolic dysfunction, type 2 diabetes and cardiovascular risk observed in this cluster. When comparing the liver-specific and the control clusters, we observed upregulation of genes involved in lipid droplet homeostasis and intrahepatic lipid transport, including FABP4 and FABP5 , in the liver-specific cluster. This cluster also showed upregulation of genes implicated in inflammation, including CXCL9 and SPP1 , and in liver carcinogenesis, including ANXA2P1 and HULC (Extended Data Fig. and Supplementary Table ). GO-BP analysis confirmed these results, showing an upregulation of lipid localization, immunoregulatory, inflammatory and wound healing processes, mirroring the elevated liver enzymes observed in this cluster as well as the higher risk of progressive liver disease in the UK Biobank (Extended Data Fig. ).

Differential metabolomic analysis across clusters

To further elucidate biological differences between the cardiometabolic and liver-specific clusters, we analyzed the metabolomics data available in ABOS (Fig. ). When comparing the cardiometabolic and liver-specific clusters, we observed increased concentrations of carbohydrates in the cardiometabolic cluster (Extended Data Fig. ), reflecting the dysglycemic state (Table ). However, most differences concerned amino acid and lipid metabolites; in particular, the amino acid metabolites tyramine O-sulfate, homocitrulline, p-cresol glucuronide, phenylacetylglutamine, phenylacetylglutamate, 4-hydroxyphenylacetylglutamine, 4-hydroxyphenylacetate and imidazole propionate, previously associated with the gut microbiota, showed the highest and most significant increases in the cardiometabolic cluster. Deoxycholate, a secondary bile acid, was also elevated, suggesting changes in lipid metabolism and liver function. These metabolites were also differentially abundant between the cardiometabolic and control clusters (Extended Data Fig. and Supplementary Table ) and are, therefore, probably linked to the dysmetabolic state. Differences were also observed in the comparison between the liver-specific and control clusters, with elevated levels of 5α-androstan-3α,17β-diol monosulfate, its disulfate form, glycoursodeoxycholic acid sulfate, and taurochenodeoxycholic acid 3-sulfate suggesting changes in steroid processing.
Furthermore, higher levels of ursodeoxycholate, glycochenodeoxycholate glucuronide and glycochenodeoxycholate 3-sulfate, and decreased levels of cysteine-glutathione disulfide, were observed in both the liver-specific and cardiometabolic clusters compared with the control cluster (Extended Data Fig. and Supplementary Table ). The decrease in cysteine-glutathione disulfide, possibly linked to oxidative stress and liver function, indicates that reduced antioxidant capacity might be a common feature of the two MASH subtypes or a consequence of the severe phenotype. Taken together, these transcriptomics and metabolomics analyses support the existence of two biologically distinct types of severe MASLD.

Molecular features of the cardiometabolic cluster versus dysglycemia

Since a majority of individuals in the cardiometabolic cluster have type 2 diabetes, we also investigated whether the molecular features of that cluster differ from those merely associated with dysglycemia. For that purpose, we compared the liver gene transcripts and metabolites that were differentially abundant between the cardiometabolic cluster and the control cluster with those that were differentially abundant between individuals with type 2 diabetes and nondiabetic controls. We found that the cardiometabolic cluster exhibited a set of 199 unique liver transcripts that were not overexpressed in the type 2 diabetes group, indicating a distinctive transcriptional signature corresponding to 58 pathways expressed in the cardiometabolic cluster but not in the type 2 diabetes group. Specifically, the cardiometabolic cluster showed distinct molecular pathways involving unique aspects of lipid transport and metabolism, immune response modulation, oxidative stress and extracellular matrix remodeling, suggesting a heightened state of metabolic activity and cellular defense, as well as active involvement in managing inflammation (Supplementary Table ). Regarding metabolites, our analyses also revealed a significant overlap between the type 2 diabetes group and the cardiometabolic cluster, with 151 metabolites differentially abundant in both subgroups, many being directly linked to dysglycemia, such as monosaccharides and disaccharides (for example, glucose and sucrose). However, we identified a distinctive subset of 88 metabolites unique to the cardiometabolic cluster. These 'cardiometabolic-specific' metabolites include glycerophospholipids, sphingolipids, amino acid derivatives, products of protein metabolism and bile acid metabolites, unveiling a metabolic signature particular to this cluster at risk for MASH. These metabolites highlight disturbances in lipid processing, protein and energy metabolism, the inflammatory profile and potential gut microbiome interactions that are not present in the type 2 diabetes profile (Supplementary Table ).
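The overlap analysis described in this subsection reduces to set operations on the lists of significantly different features. A minimal sketch follows, assuming hypothetical result objects (for example, from the contrasts sketched earlier) and an adjusted-P threshold of 0.05.

# Extract identifiers of significant features from a results table
sig_ids <- function(res, alpha = 0.05) {
  res <- as.data.frame(res)
  rownames(res)[!is.na(res$padj) & res$padj < alpha]
}

sig_cm  <- sig_ids(res_cm_vs_ctrl)   # cardiometabolic cluster vs control
sig_t2d <- sig_ids(res_t2d_vs_ctrl)  # type 2 diabetes vs nondiabetic controls

shared           <- intersect(sig_cm, sig_t2d)  # e.g. the 151 shared metabolites
cluster_specific <- setdiff(sig_cm, sig_t2d)    # e.g. the 199 transcripts / 88 metabolites
length(shared); length(cluster_specific)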
Cluster analysis and identification of MASLD types were performed on the basis of the data of 1,389 French participants from the Atlas Biologique de l’Obésité Sévère (ABOS) cohort (Extended Data Fig. ). Overall, we identified six clusters with distinctive patterns of the six clustering variables in the ABOS cohort (Fig. ). We then added patients from three independent cohorts to these clusters, namely, the Universitair Ziekenhuis Antwerpen (UZA) cohort from Belgium ( n = 463), the Molecular Architecture of FAtty Liver Disease in individuals with obesity undergoing bAriatric surgery (MAFALDA) cohort from Italy ( n = 261) and the Helsinki cohort from Finland ( n = 375) (Extended Data Fig. ). Due to the low number of participants in some individual clusters across cohorts, we pooled the three cohorts for the following analyses, resulting in a consolidated cohort of 1,099 individuals, referred to hereafter as the validation cohort (Fig. ). In the ABOS cohort, cluster 1 contained 18% of participants and was characterized by older age and hypertension; cluster 2 included 11% of participants and had the highest hemoglobin A1c (HbA1c), high triglycerides and hypertension; cluster 3 had 13% of participants, young age and the highest body mass index (BMI); cluster 4 had 26% of participants and the highest low-density lipoprotein (LDL) cholesterol levels; cluster 5 had 7% of participants and the highest alanine aminotransferase (ALT) levels; and cluster 6 had 24% of participants and a majority of females with a more favorable metabolic profile (Fig. and Extended Data Table ). Despite marked differences in age and prevalence of type 2 diabetes between clusters 2 and 5, liver histology revealed high prevalence of MASH and advanced fibrosis ( F ≥ 3) in these two subgroups, as compared with other clusters combined: 33.6% and 24.2% versus 5.0%, and 21.8% and 15.8% versus 3.4%, respectively (all adjusted P < 0.001 versus other clusters combined). To further examine the potential differences in mechanisms driving MASH, we pooled the clusters with lower severity of MASLD (clusters 1, 3, 4 and 6) in a ‘control’ cluster, which was compared with cluster 2 and cluster 5 (Fig. and Table ). To replicate these findings, we then assigned the participants of the three validation cohorts with liver histology (UZA, MAFALDA and Helsinki) to the same subgroups, based on which cluster they were most similar to. Results showed similar distributions of clusters across the three cohorts (Figs. and , and Extended Data Fig. ). Like in the ABOS cohort, the potential cardiometabolic cluster (cluster 2), characterized by the highest HbA1c, hypertension and dyslipidemia, and the liver-specific cluster (cluster 5), characterized by the highest ALT, were similarly enriched in participants presenting more severe histological features of MASLD, including MASH and liver fibrosis. We further confirmed the association of the cardiometabolic and liver-specific clusters with at-risk liver phenotype in a subset of the UK Biobank participants ( n = 6,792) who underwent liver magnetic resonance imaging (MRI). Consistent with what was observed with histology in the ABOS cohort, the cardiometabolic and liver-specific clusters in the UK Biobank were similarly enriched in participants presenting typical features of hepatic steatosis (proton density fat fraction (PDFF) >5.5%) and MASH (PDFF >5.5% and iron-corrected T1 (cT1) >800 ms) (Fig. and Extended Data Table ).
MASLD has a strong genetic component with variants in PNPLA3 , TM6SF2 , MBOAT7 and GCKR accounting for a large fraction of its heritability and accelerating liver disease progression to MASH, cirrhosis and hepatocellular carcinoma – . We hypothesized that the liver-specific cluster could be enriched in these genetic variants. Therefore, we examined the difference of polygenic risk score of hepatic fat content (PRS-HFC) distribution in the liver-specific cluster 5 compared with the cardiometabolic and control clusters in ABOS, finding an enrichment of PRS-HFC in this cluster (adjusted P = 0.034 and adjusted P < 0.001 versus the cardiometabolic and control clusters, respectively) (Table ). Results were similar when we considered only the PNPLA3 rs738409 variant ( P < 0.01 and P < 0.001 versus the cardiometabolic and control clusters, respectively) (Fig. ). These results were confirmed in UK Biobank participants (Extended Data Table ).
In the UK Biobank, individuals allocated in the six clusters exhibited similar characteristics to those observed in the ABOS cohort (Extended Data Table and Extended Data Fig. ). During a median (interquartile range) follow-up of 13.4 (12.6–14.1) years, there were 2,676 (1.12%) individuals who developed chronic liver disease, with the liver-specific and cardiometabolic clusters being the ones with the highest cumulative incidence (both P < 0.001 versus control cluster) (Fig. and Extended Data Table ). Following adjustment for age, sex and alcohol intake, the liver-specific and cardiometabolic clusters had a more than fourfold increased risk of chronic liver disease compared with the control cluster (adjusted hazard ratio (HR) 4.52, 95% confidence interval (CI) 3.88–5.26, P < 0.001, and adjusted HR 4.04, 95% CI 3.50–4.66, P < 0.001, respectively) (Fig. ). During a median (interquartile range) follow-up of 13.4 (12.7–14.1) years, there were 20,721 (10.59%) individuals who developed cardiovascular disease, with the cardiometabolic cluster being the one with the highest cumulative incidence: 21.88% in the cardiometabolic cluster versus 10.37% in the control cluster (HR 2.31, 95% CI 2.16–2.47; P < 0.001 versus control), and 9.52% in the liver-specific cluster (HR 0.91, 95% CI 0.82–1.00; P = 0.054 versus control) (Fig. and Extended Data Table ). When the analysis was adjusted for age, sex and alcohol intake, the cardiometabolic cluster had a significantly increased risk of experiencing cardiovascular disease compared with the control cluster (adjusted HR 1.80, 95% CI 1.68–1.93; P < 0.001), which was also significantly higher than the increase in risk of the liver-specific cluster compared with the control cluster (adjusted HR 1.18, 95% CI 1.07–1.31; P = 0.001) (Fig. ). During a median (interquartile range) follow-up of 13.3 (12.6–14.1) years, there were 8,563 (4.35%) individuals who developed type 2 diabetes, with the cardiometabolic cluster being the one with the highest cumulative incidence ( P < 0.001 versus both liver-specific and control clusters) (Fig. and Extended Data Table ). Following adjustment for age, sex and alcohol intake, the cardiometabolic cluster had a nearly sevenfold increased risk of developing type 2 diabetes compared with the control cluster (adjusted HR 6.82, 95% CI 6.01–7.73; P < 0.001), which was higher than the increase in risk of the liver-specific cluster compared with the control cluster (adjusted HR 2.91, 95% CI 2.62–3.23; P < 0.001) (Fig. ). Of note, a majority of participants from the cardiometabolic cluster also presented with type 2 diabetes, which may explain the higher risk of cardiovascular disease observed in this cluster. Likewise, the mean HbA1c level remained superior in the cardiometabolic cluster after excluding patients with preexisting type 2 diabetes for analyzing incident diabetes (Extended Data Table ). However, adjusting for HbA1c did not fully remove the association of the cardiometabolic cluster with type 2 diabetes risk. Sensitivity analyses excluding individuals with BMI <27 kg m −2 or those with excessive alcohol consumption (>50/60 g per day for women/men) showed similar results to the main analysis (Extended Data Table ). In summary, the cardiometabolic cluster had a higher risk of developing cardiovascular disease and type 2 diabetes, and a similar risk of developing chronic liver disease, as compared with the liver-specific cluster.
We then explored the added value of the proposed clustering, beyond each of its individual components, to predict the various clinical outcomes. For that purpose, for each outcome, we first examined the overall predictive power of each variable of interest compared with clustering alone. No individual variable performed better than clustering at predicting simultaneously the three clinical outcomes (Extended Data Table ). For example, ALT alone predicted incident chronic liver disease better than clustering, but clustering was superior at predicting cardiovascular disease. In contrast, HbA1c predicted incident cardiovascular disease better than clustering, but clustering performed better in the prediction of chronic liver disease. Likewise, among patients without diabetes at the time of inclusion, age, BMI, HbA1c, ALT and triglycerides performed better in predicting the risk of incident diabetes better than clustering alone. In contrast, clustering did better than LDL cholesterol alone at predicting all outcomes. Second, we performed multivariable analyses, in which the clustering model was first adjusted for sex, age and alcohol use, and second, one by one, ALT, HbA1c, triglycerides, BMI or LDL cholesterol (Fig. ). Although in most cases the HR estimates of at-risk clusters were reduced after further adjustment for one other clustering variable, all values remained statistically significant compared with the control cluster in at least one at-risk cluster for each outcome. Collectively, these data show that clustering was superior to each individual variable in predicting simultaneously all three clinical trajectories.
To gain insights into the biological differences between the cardiometabolic and liver-specific clusters, we performed differential gene expression analysis in the liver in a subset of the ABOS cohort participants, including 97 individuals from the cardiometabolic cluster, 63 from the liver-specific cluster and 671 from the control cluster. The comparison of the cardiometabolic and the liver-specific clusters showed upregulation of genes involved in cholesterol metabolism and biosynthesis (for example, HMGCS1 , MVD , CYP51A1 , LSS , SC5D and LDLR ) and glycolysis (for example, ALDOC ) in the cardiometabolic cluster (Fig. and Supplementary Table ), which were identified as enriched pathways also by Gene Ontology biological processes (GO-BP) analysis, together with alcohol metabolic processes (Extended Data Fig. ). The chitinase 3-like 1 ( CHI3L1 ) gene, linked to liver fibrogenesis , was the most highly differentially expressed, possibly reflecting a slightly higher albeit not significantly different fibrosis stage in the individuals in this cluster as well as an older age (Table ). Similar results were obtained when comparing the cardiometabolic and the control clusters, confirming the upregulation of genes involved in cholesterol metabolism and synthesis in the cardiometabolic cluster (Extended Data Fig. ), mirroring the higher metabolic dysfunction, type 2 diabetes and cardiovascular risk observed in this cluster. When comparing the liver-specific and the control clusters, we observed upregulation of genes involved in lipid droplet homeostasis and intrahepatic lipid transport, including FABP4 and FABP5 , in the liver-specific cluster. This cluster also showed upregulation of genes implicated in inflammation, including CXCL9 and SPP1 , and liver carcinogenesis, including ANXA2P1 and HULC (Extended Data Fig. and Supplementary Table ). GO-BP analysis confirmed these results, showing an upregulation of lipid localization, immunoregulatory, inflammatory and wound healing processes and mirroring the elevated liver enzymes observed in this cluster as well as a higher risk of progressive liver disease in UK Biobank (Extended Data Fig. ).
To further elucidate biological differences between the cardiometabolic and liver-specific clusters, we analyzed the metabolomics data available in ABOS (Fig. ). When comparing the cardiometabolic and liver-specific clusters, we observed increased concentrations of carbohydrates in the cardiometabolic cluster (Extended Data Fig. ), reflecting the dysglycemic state (Table ). Most differences, however, concerned amino acid and lipid metabolites; in particular, the amino acid metabolites tyramine O-sulfate, homocitrulline, p-cresol glucuronide, phenylacetylglutamine, phenylacetylglutamate, 4-hydroxyphenylacetylglutamine, 4-hydroxyphenylacetate and imidazole propionate, previously associated with the gut microbiota – , showed the highest and most significant increases in the cardiometabolic cluster. Deoxycholate, a secondary bile acid, was also elevated, suggesting changes in lipid metabolism and liver function. These metabolites were also differentially abundant between the cardiometabolic and control clusters (Extended Data Fig. and Supplementary Table ) and are therefore probably linked to the dysmetabolic state. Differences were also observed between the liver-specific and control clusters, with elevated levels of 5α-androstan-3α,17β-diol monosulfate, its disulfate form, glycoursodeoxycholic acid sulfate and taurochenodeoxycholic acid 3-sulfate suggesting changes in steroid processing. Furthermore, both the liver-specific and the cardiometabolic clusters showed higher levels of ursodeoxycholate, glycochenodeoxycholate glucuronide and glycochenodeoxycholate 3-sulfate and decreased levels of cysteine-glutathione disulfide compared with the control cluster (Extended Data Fig. and Supplementary Table ). The latter finding, possibly linked to oxidative stress and liver function, indicates that reduced antioxidant capacity might be a common feature of the two MASH subtypes or a consequence of the severe phenotype. Taken together, these transcriptomics and metabolomics analyses support the existence of two biologically distinct types of severe MASLD.
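The sketch below illustrates the metabolite preprocessing described in Methods (log transformation and per-compound minimum imputation), after which the same moderated t-test workflow shown above applies; the matrix and its missingness pattern are simulated.

```r
# Sketch of metabolomics preprocessing: log transformation and imputation
# of missing values with the minimum observed value for each compound.
set.seed(1)
metab <- matrix(rlnorm(50 * 20), nrow = 50,
                dimnames = list(paste0("met", 1:50), NULL))
metab[sample(length(metab), 100)] <- NA   # simulate missingness

log_metab <- log2(metab)
imputed   <- t(apply(log_metab, 1, function(x) {
  x[is.na(x)] <- min(x, na.rm = TRUE)    # per-compound minimum
  x
}))
stopifnot(!anyNA(imputed))
# `imputed` can then enter the same limma comparison shown above.
```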
Since a majority of individuals in the cardiometabolic cluster have type 2 diabetes, we also investigated whether the molecular features of that cluster differ from those merely associated with dysglycemia. For that purpose, we compared liver gene transcripts and metabolites that were differentially abundant between the cardiometabolic and control clusters with those that were differentially abundant between individuals with type 2 diabetes and nondiabetic controls. We found that the cardiometabolic cluster exhibited a set of 199 unique liver transcripts that were not overexpressed in the type 2 diabetes group, a distinctive transcriptional signature corresponding to 58 pathways enriched in the cardiometabolic cluster but not in the type 2 diabetes group. Specifically, the cardiometabolic cluster showed distinct molecular pathways involving unique aspects of lipid transport and metabolism, immune response modulation, oxidative stress and extracellular matrix remodeling, suggesting a heightened state of metabolic activity and cellular defense, as well as active involvement in managing inflammation (Supplementary Table ). Regarding metabolites, our analyses also revealed a substantial overlap between type 2 diabetes and the cardiometabolic cluster, with 151 metabolites differentially abundant in both subgroups, many directly linked to dysglycemia, such as monosaccharides and disaccharides (for example, glucose and sucrose). However, we identified a distinctive subset of 88 metabolites unique to the cardiometabolic cluster. These 'cardiometabolic-specific' metabolites include glycerophospholipids, sphingolipids, amino acid derivatives, products of protein metabolism and bile acid metabolites, unveiling a metabolic signature particular to this cluster at risk for MASH. They highlight disturbances in lipid processing, protein and energy metabolism, inflammatory profile and potential gut microbiome interactions that are not present in the type 2 diabetes profile (Supplementary Table ).
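A minimal sketch of this overlap analysis is given below; the two hit lists are fabricated placeholders sized to echo the reported counts (151 shared, 88 cluster-specific), not the actual metabolite identities.

```r
# Partition the cardiometabolic-vs-control hits into those shared with the
# T2D-vs-control comparison and those unique to the cluster.
cardiometabolic_hits <- paste0("met", 1:239)   # hypothetical identifiers
t2d_hits             <- paste0("met", 60:210)

shared            <- intersect(cardiometabolic_hits, t2d_hits)
unique_to_cluster <- setdiff(cardiometabolic_hits, t2d_hits)
c(shared = length(shared), unique = length(unique_to_cluster))  # 151, 88
```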
In the present study, using unsupervised hard clustering, we identified two distinct endotypes of at-risk MASLD, namely, cardiometabolic MASLD and liver-specific MASLD. Both types were characterized by a severe liver phenotype at baseline; however, they showed different underlying biological profiles and distinct clinical progression patterns. These two newly defined types of MASLD could be robustly identified in several independent and well-characterized cohorts, using a simple algorithm based on six widely available traits: age, BMI, HbA1c, ALT, LDL cholesterol and triglycerides ( https://ulr-metrics.univ-lille.fr/masldclusters/ ). The two types of at-risk MASLD could not be distinguished by their liver phenotype, whether assessed by histology or by MRI, and both were associated with an increased risk of incident chronic liver disease. Cardiometabolic MASLD was, however, specifically characterized by a higher prevalence of dyslipidemia, hypertension and dysglycemia, resulting in a high risk of incident cardiovascular disease and type 2 diabetes. In contrast, liver-specific MASLD was characterized by a more pronounced elevation of liver enzymes at a younger age and showed limited risk of diabetes progression and incident cardiovascular disease. Liver-specific MASLD was also characterized by a specific genetic background, with a higher frequency of the minor allele of PNPLA3 rs738409 and a higher polygenic risk score for hepatic fat content. Importantly, the proposed clustering outperformed its individual components in simultaneously predicting the liver phenotype and the future risk of the different clinical outcomes. As expected, several individual continuous variables showed good predictive value for specific clinical outcomes in the overall UK Biobank population, namely, ALT for chronic liver disease and HbA1c for cardiovascular disease and incident diabetes. In contrast, the clustering approach surpassed all individual variables in simultaneously predicting the three outcomes. Of note, after adjustment for ALT in multivariable analysis, the risk of chronic liver disease became lower in the liver-specific cluster than in the control cluster, while it remained increased in the cardiometabolic cluster. Besides confirming the strong association between ALT and the risk of liver disease in the liver-specific cluster, this result indicates that ALT may overestimate the risk of chronic liver disease when the other clustering variables are not considered. Similarly, the positive association between the cardiometabolic cluster and cardiovascular risk became negative after adjustment for HbA1c, suggesting that HbA1c alone may overestimate the risk of cardiovascular disease and that other clustering variables, such as triglycerides or age, may favor cardiovascular disease independently of dysglycemia. Finally, in the liver-specific cluster, the elevated risk of incident diabetes was eliminated after adjustment for ALT, underscoring the specific role played by the liver in the pathophysiology of dysglycemia . Taken together, our findings highlight the potential of clustering to provide a more comprehensive risk assessment, identifying patients at risk for a range of liver and cardiometabolic diseases rather than focusing on a single condition. In addition, the resulting assignment of individuals into two clearly labeled clusters of at-risk MASLD facilitated the exploration of their biological nature.
Specifically, the cardiometabolic cluster exhibited unique liver gene transcripts and pathways not present in type 2 diabetes, involving lipid transport, immune response and inflammation, and vascular function-related pathways. In addition, metabolomic analyses identified numerous metabolites common to both type 2 diabetes and the cardiometabolic cluster, mostly linked to dysglycemia, but also some metabolites uniquely associated with the cardiometabolic cluster. These unique metabolites, including glycerophospholipids, sphingolipids and bile acid metabolites, indicate specific disturbances in lipid processing, protein and energy metabolism, and inflammation. The cardiometabolic cluster was also characterized by an increase in several gut microbiota metabolites previously linked to insulin resistance and diabetes pathogenesis, such as imidazole propionate, p-cresol glucuronide, phenylacetylglutamine, 4-hydroxyphenylacetylglutamine and phenylacetylglutamate – . Similarly, higher levels of p-cresol glucuronide and 4-hydroxyphenylacetylglutamine have been linked to cardiovascular toxicity and mortality , , . These metabolites, which are produced by the gut microbiota from aromatic amino acids, might explain, at least in part, the increased cardiovascular risk observed in this cluster. In contrast, liver-specific MASLD was more related to changes in lipid metabolism confined to the hepatocyte, in line with its specific genetic background. In this study, we thus identified distinctive endotypes of at-risk MASLD with a similar baseline liver phenotype but different biological mechanisms, ultimately resulting in distinct clinical trajectories. Two studies have previously employed data-driven clustering in MASLD , ; however, neither examined liver histology across the proposed clusters, assessed the risk of liver-related outcomes or explored the underlying molecular biology. Overall, our results demonstrate the heterogeneity of MASLD and underscore the distinct pathophysiological profiles of the newly identified clusters, highlighting the need for more targeted therapeutic approaches. For example, the thyroid hormone receptor agonist resmetirom, recently approved for the treatment of MASH, was found ineffective in a large fraction of individuals, potentially owing to disease heterogeneity . According to the present study, liver-specific MASLD, characterized by abnormal lipid droplet homeostasis and intrahepatic lipid transport genes, may respond more favorably to this drug, which specifically reduces hepatic lipid content and inflammation. In contrast, cardiometabolic MASLD may respond better to drugs regulating lipid and glucose metabolism, such as the fibroblast growth factor 21 analog pegozafermin and the pan-peroxisome proliferator-activated receptor agonist lanifibranor , or to drugs favoring weight loss and cardiovascular risk reduction, namely, the glucagon-like peptide-1 (GLP1) receptor agonist semaglutide , the GLP1–glucose-dependent insulinotropic polypeptide receptor dual agonist tirzepatide or the GLP1–glucagon receptor dual agonist survodutide . Taken together with existing evidence, the newly proposed stratification could help refine emerging therapeutic strategies based on the specific molecular pathomechanisms underlying each MASLD endotype.
These findings align with partitioned polygenic risk score analyses based on genetic associations with MASLD, including intrahepatic lipoprotein retention, which identify two distinct subtypes: one primarily liver-confined, with more aggressive liver disease, and another systemic, with a higher risk of cardiometabolic disease . Some limitations of our study must be acknowledged. First, unsupervised clustering largely depends on the traits used in the analysis. We therefore selected six biomarkers embedded in the pathological mechanisms of MASLD, with high biological plausibility. It is noteworthy that we focused the present analysis on the two clusters associated with at-risk MASLD. The other clusters may, however, also represent distinct and potentially clinically relevant subgroups of MASLD, warranting further exploration in future studies. Second, the absence of lean or overweight individuals in the validation cohort could limit the generalizability of the proposed stratification across the full spectrum of steatotic liver disease. Moreover, ABOS participants were not screened on the basis of additional clinical or biochemical markers, unlike most studies in which biopsies are performed only on at-risk individuals. Of note, the robustness of the new stratification was confirmed in independent cohorts with a higher incidence of MASH or more diverse BMI categories, and, as noted above, an independent parallel study based on a partitioned polygenic risk score identified two similar subtypes . Another debatable aspect of the present study is the use of hard clustering, which assigns each patient to a single cluster. While this method facilitates interpretation, it also ignores uncertainties within clusters, particularly for individuals at cluster boundaries. Alternative statistical approaches that provide probabilities for cluster membership, for example, model-based clustering (a brief sketch follows this paragraph), could capture within-cluster differences more effectively and influence clinical decision-making. Reversed graph embedding approaches such as discriminative dimensionality reduction via learning a tree (DDRTree) could also offer a more nuanced understanding of patient profiles . Finally, all the study cohorts comprised primarily Europeans, and our findings remain to be confirmed in other ethnic groups with other genetic backgrounds. In conclusion, this study unveiled the existence of at least two distinct types of at-risk MASLD, displaying a similar liver phenotype at baseline but different biological mechanisms and specific outcomes, ultimately resulting in distinct clinical trajectories with regard to cardiovascular disease and diabetes. The search for drug treatments should therefore reflect and selectively target these different biological pathways. Future prospective studies are needed to assess the clinical value of these two MASLD types for guiding prevention and treatment.
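To illustrate the soft-clustering alternative discussed above, the sketch below fits a Gaussian mixture with the mclust package, which returns posterior membership probabilities instead of hard labels; the six-trait matrix is simulated, and the package choice is one possible implementation, not the method used in this study.

```r
# Model-based (soft) clustering: each individual receives a probability of
# belonging to each cluster, making boundary cases explicit.
library(mclust)

set.seed(1)
X <- scale(matrix(rnorm(300 * 6), ncol = 6,
                  dimnames = list(NULL,
                                  c("age", "bmi", "hba1c", "alt", "ldl", "tg"))))

fit <- Mclust(X)            # covariance model and cluster number chosen by BIC
head(round(fit$z, 2))       # posterior membership probabilities
table(fit$classification)   # hard labels, if needed, via the MAP rule
```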
Study cohorts ABOS cohort ABOS is a prospective study ( NCT01129297 ) aiming to identify the key factors influencing the outcomes of bariatric surgery. A total of 1,545 participants enrolled between 2006 and 2021 at the Lille University Hospital, Lille, France, were included in the present analysis. All individuals provided written informed consent before inclusion. Ethical approval for the study was granted by the Comité de Protection des Personnes Nord Ouest VI (Lille, France). Demographic characteristics, anthropometric measurements, medical history, concomitant medication and laboratory tests were collected before surgery as previously described – . A 75 g oral glucose tolerance test was performed after overnight fasting at baseline and 1 year after surgery. Type 2 diabetes status was defined at baseline on the basis of a previous history of diabetes, use of antidiabetic medications, fasting plasma glucose ≥126 mg dl−1 (7.0 mmol l−1) and/or 2 h plasma glucose ≥200 mg dl−1 (11.1 mmol l−1) during the oral glucose tolerance test, and/or HbA1c ≥6.5% (48 mmol mol−1) . Liver histology was obtained at baseline through a percutaneous liver needle biopsy performed during surgery as previously described – . All liver biopsies were analyzed at Lille University Hospital by two expert liver pathologists, according to the NASH Clinical Research Network (NASH CRN) scoring system, as previously described , . Briefly, pathologists were blinded to the patient’s clinical and biological data. The reports were drawn up using a standardized template adapted to the recommendations of the NASH CRN group. All biopsies obtained before 2011 were reanalyzed and adapted to NASH CRN recommendations. Liver biopsies from patients with ‘borderline NASH’ histology, or with borderline size or length, were reanalyzed by two expert pathologists. The diagnosis of MASH was made by pathologists in the simultaneous presence of steatosis, inflammation and ballooning. Disease activity was subsequently graded with the nonalcoholic fatty liver disease activity score (NAS) according to specific histological features, as the unweighted sum of the scores for steatosis (0–3), lobular inflammation (0–3) and ballooning (0–2), ranging from 0 to 8. Liver fibrosis was scored from F0 to F4 (ref. ). UZA cohort The UZA cohort included 467 patients referred to the Obesity Clinic at Antwerp University Hospital, Edegem, Belgium, for suspected MASLD based on imaging and biochemistry data. The collection of clinical, anthropometric and histological data has been previously described , . A percutaneous or laparoscopic-guided percutaneous liver needle biopsy was performed on participants with overweight/obesity as part of the Hepatic and Adipose Tissue and Functions in Metabolic Syndrome (HEPADIP) study (Belgian registration number B30020071389, Antwerp University Hospital File 6/25/125) as previously described . Liver histology was assessed according to the NASH CRN , . Individuals with alcohol consumption above 30/20 g per day in men/women were excluded from the analysis. Written informed consent was obtained from all patients in both cohorts, and the studies were conducted in conformity with the Declaration of Helsinki. MAFALDA cohort A total of 264 participants with liver biopsy data from the MAFALDA cohort were included in the analyses . Briefly, consecutive individuals with morbid obesity eligible for bariatric surgery were recruited from May 2020 to June 2021 at Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy.
Preoperative clinical and laboratory data were collected using standardized procedures. An intraoperative liver biopsy was obtained. Liver histology was assessed according to the NASH CRN , , as described above. Individuals with alcohol consumption above 30/20 g per day in men/women were excluded from the analysis. The MAFALDA study was approved by the Local Research Ethics Committee (no. 16/20), and it was conducted in accordance with the principles of the Declaration of Helsinki. All participants gave written informed consent to the study. Helsinki cohort The Helsinki cohort enrolled 343 consecutive individuals with morbid obesity eligible for bariatric surgery and 42 consecutive individuals with a BMI ≥25 kg m−2 undergoing liver biopsy for suspected MASH, all recruited between 2006 and 2018 at the Helsinki University Hospital, Helsinki, Finland. A week before the liver biopsy, participants underwent clinical examination and blood sampling as previously described . Liver histology was assessed according to the NASH CRN , , as described above. Individuals with alcohol consumption above 30/20 g per day in men/women were excluded from the analysis. The study was approved by the Local Research Ethics Committee at Helsinki University Hospital. All participants gave written informed consent to the study. UK Biobank cohort The UK Biobank is a large prospective cohort study that recruited approximately 500,000 participants (age 40–69 years) between 2006 and 2010 throughout the United Kingdom . Clinical and laboratory data were collected using highly standardized procedures. Medical diagnoses were obtained through linkage of hospital admissions, death and cancer registers from the National Health Service records (data fields 41270, 40001, 40002 and 40006). The UK Biobank study has been approved by the NorthWest Multicenter Research Ethics Committee (no. 21/NW/0157). All participants gave written informed consent to the study. Data used in this study were obtained under application number 37142. In the current study, we selected unrelated UK Biobank participants of European ancestry on the basis of our quality control pipeline, which has been described in detail previously , , , and we included individuals with BMI ≥25 kg m−2 and/or with type 2 diabetes as defined elsewhere . Participants were scanned at the UK Biobank Imaging Centre in Cheadle (United Kingdom) using a Siemens 1.5T MAGNETOM Aera as described in detail elsewhere , . Briefly, a shortened modified Look-Locker inversion (ShMOLLI) sequence was used to quantify liver T1, and a multi-echo spoiled gradient echo sequence was used to quantify liver iron and fat. Data were analyzed using LiverMultiScan Discover 4.0 software. Hepatic steatosis was defined by PDFF >5.5% (ref. ), and MASH by PDFF >5.5% and iron-corrected T1 (cT1) >800 ms (refs. , ). Cluster analysis Six variables associated with MASLD pathophysiology and increased risk of MASH were selected for clustering in ABOS, namely, age, BMI, HbA1c, ALT, LDL cholesterol and circulating triglycerides. Cluster analysis and identification of MASLD subtypes were performed on 1,389 ABOS participants (Fig. ),
after the exclusion of 54 patients with self-declared alcohol consumption above 50/60 g per day for women and men, respectively, at the first visit, to avoid any risk of including patients with alcohol-related liver disease; 58 participants with a BMI ≤30 kg m−2; 27 participants with missing values in clustering traits (that is, age, BMI, HbA1c, ALT, LDL cholesterol and circulating triglycerides); and 17 participants with absolute standardized values of 5 or higher in at least one clustering trait (Extended Data Fig. ). The analysis was performed using the partitioning around medoids method in R (package ‘cluster’, version 2.1.4) , which is a more robust version of k-means clustering. Distances were computed as Euclidean distances using standardized variables scaled to a mean of 0 and a standard deviation of 1. To estimate the optimal number of clusters, we evaluated the silhouette widths for each clustering, varying the number of clusters from three to ten. We determined the optimal number of clusters by choosing the configuration that yielded the highest silhouette coefficients, signifying well-delineated clusters whose members are closely related to one another and distinctly separate from individuals in other clusters. We then assessed the stability of the resulting clusters using the R function clusterboot from the fpc package (v.2.2-12), by resampling the original data 2,000 times and computing the Jaccard similarities of the original clusters to the most similar clusters in the resampled data. The mean (standard deviation) Jaccard similarity was 0.73 (0.07) across all clusters. Data from the UZA, MAFALDA and Helsinki cohorts were normalized using ABOS values for centering and scaling. Participants were then allocated to the cluster they were most similar to, calculated as their Euclidean distance from the nearest cluster medoid derived from ABOS coordinates, after the exclusion of participants with absolute standardized values of 5 or higher in at least one clustering trait. Data from the UK Biobank cohort were normalized using ABOS values for centering and scaling. Participants were allocated to the cluster they were most similar to, again calculated as the Euclidean distance from the nearest ABOS-derived cluster medoid, after the exclusion of those with a self-reported history or medical diagnosis of other causes of liver disease, with a medical diagnosis of the target longitudinal outcome at baseline, or with absolute standardized values of 5 or higher in at least one clustering trait. The Calinski–Harabasz Index was 263 for the ABOS cohort and reached 174 in the validation cohort, indicating well-defined clusters and confirming the transportability of the proposed stratification across diverse populations. In the UK Biobank cohort, encompassing a broader BMI range and less clinically extreme cases, the Calinski–Harabasz Index increased even further, to 18,774, probably owing to the larger and more diverse sample size. Visualizing individual risk in relation to their phenotype As a potential aid for assisting clinicians in defining individual profiles of patients with MASLD, we developed an app ( https://ulr-metrics.univ-lille.fr/masldclusters/ ).
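The following sketch mirrors the clustering workflow just described — PAM on standardized traits, silhouette-guided choice of k and bootstrap stability — on simulated data; clusterboot is called here through fpc's claraCBI interface with usepam = TRUE as one way to wrap PAM, and the number of resamples is reduced for speed.

```r
# Sketch of the PAM clustering, silhouette-based selection of k and
# Jaccard bootstrap stability described above. Data are simulated.
library(cluster)
library(fpc)

set.seed(1)
X <- scale(matrix(rnorm(500 * 6), ncol = 6))   # six standardized traits

# Average silhouette width for k = 3..10; pick the best k
sil    <- sapply(3:10, function(k) pam(X, k)$silinfo$avg.width)
k_best <- (3:10)[which.max(sil)]

fit <- pam(X, k_best)
table(fit$clustering)

# Bootstrap stability: mean Jaccard similarity per cluster
# (the paper used 2,000 resamples; B is reduced here for speed)
stab <- clusterboot(X, B = 100, clustermethod = claraCBI,
                    k = k_best, usepam = TRUE)
stab$bootmean
```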
Genotyping In the ABOS cohort, genotyping was available for 1,259 participants and was performed using the Illumina Infinium assay . This analysis was conducted at the SNP&SEQ Technology Platform, Molecular Medicine, BMC, Husargatan 3, Uppsala, Sweden. Results were analyzed using the GenomeStudio 2.0.3 software. The following variants were assessed: PNPLA3 rs738409 C>G (p.I148M), TM6SF2 rs58542926 C>T (p.E167K), MBOAT7 rs641738 C>T and GCKR rs1260326 C>T (p.P446L). In the UK Biobank, genotyping was available for approximately 490,000 individuals and was performed using two similar genotyping arrays (that is, the Affymetrix UK BiLEVE and UK Biobank Axiom arrays) as described elsewhere ; the same four variants were assessed. The PRS-HFC was computed according to the originally reported formula . Long-term longitudinal outcomes We analyzed the risk of developing hepatic and extrahepatic outcomes and overall mortality in the UK Biobank cohort. To estimate the incidence of liver outcomes, we selected 213,180 individuals without a self-reported history or medical diagnosis of any liver disease (International Classification of Diseases 10th edition (ICD-10) B18, B19, C22.0, E83.0, E83.1, E88.0, I82.0, I85.0, I85.9, K70, K71, K72.1, K72.9, K74.1, K74.2, K74.3, K74.4, K74.5, K74.6, K75.2, K75.3, K75.4, K75.8, K75.9, K76.5, K76.6, K76.7, K76.8, K76.9, K83.0, R18 and Z94.4) at baseline and identified those who developed chronic liver disease (ICD-10 C22.0, I85.0, I85.9, K70, K72.1, K72.9, K73, K74.0, K74.1, K74.2, K74.6, K76.0, K76.6, K76.7, K76.8, K76.9 and Z94.4) across the clusters. Participants were excluded from the analyses if they received a medical diagnosis of competing liver diseases (ICD-10 B18, B19, E83.0, E83.1, E88.0, I82.0, K71, K74.3, K74.4, K74.5, K75.2, K75.3, K75.4, K75.8, K75.9, K76.5 and K83.0) before the diagnosis of the liver outcome. To estimate the incidence of cardiovascular outcomes, we selected 195,739 individuals without a self-reported history or medical diagnosis of chronic viral hepatitis (ICD-10 B18 and B19), other causes of liver disease (ICD-10 E83.0, E83.1, E88.0, I82.0, K70, K71, K74.3, K74.4, K74.5, K75.2, K75.3, K75.4, K75.8, K75.9, K76.5, K76.8, K76.9 and K83.0) or cardiovascular disease (ICD-10 I20–I25, I60–I64, I69 and G45) at baseline, and identified those who developed cardiovascular disease across the clusters. To estimate the incidence of type 2 diabetes, we selected 196,791 individuals without a self-reported history or medical diagnosis of chronic viral hepatitis (ICD-10 B18 and B19), other causes of liver disease (ICD-10 E83.0, E83.1, E88.0, I82.0, K70, K71, K74.3, K74.4, K74.5, K75.2, K75.3, K75.4, K75.8, K75.9, K76.5, K76.8, K76.9 and K83.0) or type 2 diabetes as defined elsewhere at baseline, and identified those who developed type 2 diabetes (ICD-10 E11 and E14) across the clusters. Detailed information about the UK Biobank methods and clinical diagnoses is provided in Supplementary Table .
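To show how the cumulative incidence of these outcomes under competing risks (described under Statistical analysis below) can be estimated, here is a hedged sketch using the multi-state Aalen–Johansen estimator in the survival package; the event coding, cluster labels and data are all simulated stand-ins for the ICD-10-derived follow-up.

```r
# Aalen–Johansen cumulative incidence of chronic liver disease by cluster,
# with other-cause death as a competing event. Data are simulated.
library(survival)

set.seed(1)
n  <- 1000
df <- data.frame(
  time    = rexp(n, 0.05),
  status  = factor(sample(c("censor", "liver_disease", "death_other"), n,
                          replace = TRUE, prob = c(0.8, 0.1, 0.1)),
                   levels = c("censor", "liver_disease", "death_other")),
  cluster = factor(sample(c("control", "cardiometabolic", "liver_specific"),
                          n, replace = TRUE))
)

# With a factor status whose first level is censoring, survfit returns
# Aalen-Johansen state-occupancy (cumulative incidence) estimates
aj <- survfit(Surv(time, status) ~ cluster, data = df)
summary(aj, times = c(5, 10))
```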
Liver transcriptomic data generation and normalization Liver transcriptomic data were available for a subset of 831 participants from the ABOS cohort, as previously described . Total RNA was extracted from 30 mg of frozen liver biopsy tissue for Affymetrix microarray analysis using TRIzol reagent (Thermo Fisher Scientific), followed by purification on RNeasy columns (Qiagen). RNA purity and quantity were assessed using a NanoDrop spectrophotometer (Thermo Fisher Scientific). RNA integrity was quantified using the Agilent RNA 6000 Nano assay on an Agilent 2100 Bioanalyzer. Raw data from the Affymetrix microarrays were processed with robust multi-array average (RMA) with GC correction and scaled intensities (GC-RMA-scale) as the normalization method. Metabolomic data generation and normalization In the ABOS cohort, nontargeted global metabolomic analysis was performed on plasma samples from 1,322 participants by Metabolon, using two independent platforms: ultrahigh-performance liquid chromatography/tandem mass spectrometry optimized for basic or acidic species, and gas chromatography–mass spectrometry. Raw metabolomics data were log-transformed, and missing values were imputed with the minimum observed value for each compound. Statistical analysis Data were reported as median (interquartile range) for continuous variables and frequencies (percentages) for categorical variables. Clusters were compared using the Kruskal–Wallis test, chi-squared test or Fisher’s exact test, as appropriate. Raw P values were adjusted for multiple testing separately for clinical, histological and genetic data; to control the family-wise error rate, the Bonferroni method was used. Differences were considered statistically significant when adjusted P values were less than 0.05. For statistically significant variables, post hoc analysis was performed comparing pairwise the MASH-enriched MASLD clusters (2 and 5) and the combined nonenriched MASLD clusters (1, 3, 4 and 6) using the Dunn test, chi-squared test or Fisher’s exact test, as appropriate, with Bonferroni adjustment. Differential analysis of the liver transcriptome across clusters was performed using moderated t-tests from the R Bioconductor package limma v.3.60.4. The same methodology was applied to the metabolome after exclusion of xenobiotics. Differences were considered statistically significant when P values adjusted for multiple comparisons using the Benjamini–Hochberg correction (to control the false discovery rate) were less than 0.05 and the absolute value of the log2 fold change was greater than 0.26. Group comparisons for genes were represented using volcano plots, and the numbers of differentially expressed genes between the various clusters were reported in Euler diagrams. Pathway enrichment on the transcriptome was performed with the R package clusterProfiler (v.4.7.1), based on GO-BP pathways. The GSEA method was run with the absolute value of the moderated t-test statistic as the ranking metric. The P values of enriched pathways were adjusted using the Benjamini–Hochberg procedure, and an adjusted P value <0.05 was considered significant.
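As a sketch of the enrichment step just described, the code below runs GSEA on GO-BP terms with clusterProfiler, ranking genes by the absolute moderated t-statistic; it assumes `fit2` from the limma sketch shown earlier and, unlike the placeholder gene names used there, gseGO requires real identifiers (here assumed to be Entrez IDs) matching the chosen OrgDb.

```r
# GO-BP GSEA ranked by |moderated t|, with Benjamini-Hochberg adjustment.
library(clusterProfiler)
library(org.Hs.eg.db)

# `fit2` is the limma fit from the earlier sketch; names of the ranking
# vector must be Entrez gene IDs for this OrgDb/keyType to work
ranked <- sort(abs(fit2$t[, 1]), decreasing = TRUE)

gsea <- gseGO(geneList      = ranked,
              OrgDb         = org.Hs.eg.db,
              keyType       = "ENTREZID",
              ont           = "BP",
              pAdjustMethod = "BH",
              pvalueCutoff  = 0.05)
head(as.data.frame(gsea))
```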
In the UK Biobank, clusters were compared using analysis of variance, the Kruskal–Wallis test, the chi-squared test or Fisher’s exact test, as appropriate, adjusted for multiple testing separately for clinical and genetic data using the Bonferroni method; post hoc comparisons were likewise carried out with Bonferroni correction. The incidence of chronic liver disease, cardiovascular disease and type 2 diabetes was defined as the composite occurrence of the clinical event or event-related death during follow-up. The cumulative incidence of the clinical outcomes was then computed according to the Aalen–Johansen method for chronic liver disease, cardiovascular disease and type 2 diabetes, taking into account the competing occurrence of other-cause death and of selected liver diseases (only in the case of chronic liver disease; see above for ICD-10 codes). Cause-specific HRs were calculated through Cox regressions adjusted for age, sex and alcohol intake. The proportional hazards assumption was verified through inspection of the Schoenfeld residuals. Sensitivity analyses were performed (1) including only individuals with BMI ≥27 kg m−2 and (2) excluding those with harmful alcohol consumption (>50/60 g per day for women/men). Statistical analyses and graphical representations were performed using R statistical software v.4.4.1 (R Foundation for Statistical Computing, Vienna, Austria). Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at 10.1038/s41591-024-03283-1.
Supplementary Information Supplementary Table 1. Liver gene expression with significant differences between the cardiometabolic and liver-specific clusters (A), the cardiometabolic and control clusters (B) and the liver-specific and control clusters (C); metabolites with significant differences between the cardiometabolic and liver-specific clusters (D), the cardiometabolic and control clusters (E) and the liver-specific and control clusters (F); molecular features that were differentially expressed between the type 2 diabetes and non-T2D groups and between the cardiometabolic and control clusters (G). Supplementary Table 2. Definition of self-reported history of liver disease, cardiovascular disease and type 2 diabetes (UK Biobank data-field 20002) and ICD-10 codes used to define liver disease, cardiovascular disease and type 2 diabetes. Reporting Summary
Urine proteomics-based analysis identifies CHI3L1 as an immune marker and potential therapeutic target for bladder cancer | 55bfe1eb-066a-4ecf-ad7d-059037d8c58b | 11830209 | Biochemistry[mh] | Bladder cancer (BCa) stands as the most prevalent malignant tumor affecting the urinary system, ranking 10th and 13th in incidence and mortality rates among malignant tumors, respectively . Data from the International Agency for Research on Cancer (IARC) in 2020 revealed a concerning upward trend in the incidence and mortality of bladder cancer, now positioning it as the second most common genitourinary tumor in men, following prostate cancer . Bladder uroepithelial carcinoma represents the predominant histologic type of bladder cancer . BCa is categorized into non-muscle invasive bladder cancer (NMIBC) and muscle-invasive bladder cancer (MIBC) based on the depth of infiltration . MIBC patients exhibit a relatively low 5-year survival rate and a less favorable prognosis compared to NMIBC patients . Notably, tumors larger than 3 cm are identified as risk factors for recurrence and progression . Despite significant advancements in early BCa detection, the mortality rate among BCa patients has shown limited improvement . In the case of metastatic BCa, postoperative chemotherapy stands as the primary treatment modality. However, the efficacy of current treatments is hindered by intra-tumor heterogeneity and chemotherapy resistance. Consequently, there is a pressing need to investigate novel marker targets elucidating the mechanisms of BCa progression and chemosensitivity. This exploration is essential to advance the prospects of precision therapy in the management of BCa. Previous studies have underscored that tumor development hinges not only on the genetic alterations within tumor cells but also on the influential role of the tumor microenvironment (TME). Abundant tumor-infiltrating immune cells, including tumor-associated macrophages (TAMs) and various lymphocytes, populate the TME. These immune cells exert influence on tumor progression by releasing cytokines and growth factors that foster cancer cell proliferation, survival, motility, and invasion . TAMs, constituting a significant portion of infiltrating immune cells in the TME, are linked to an unfavorable prognosis in tumors, encompassing both M1 and M2 types . The specific polarization phenotype of TAMs depends on the stage of tumor progression. In the early stages of carcinogenesis, known as the tumor elimination stage, local chronic inflammation tends to occur, leading to the polarization of TAMs toward the M1 type under the influence of cytokines and chemokines in the TME . Conversely, as cancer advances into the stage of tumor immune escape, alterations in factors secreted by tumor cells and mesenchymal stromal cells within the TME drive the polarization of TAMs toward the M2 type. M2 macrophages, in turn, facilitate tumor growth and metastasis through various pathways, including immune response suppression, promotion of tumor angiogenesis and lymphangiogenesis, and enhanced invasion . Moreover, it has been demonstrated that M2 macrophages can interact with cancer cells, contributing to the promotion of chemoresistance in tumors . Therefore, investigating the cancer-promoting mechanisms of M2 macrophages emerges as a promising avenue for research in the treatment of BCa. 
In this study, we conducted a multistep analysis, integrating bioinformatics and urine proteomics, to identify key genes associated with the progression and prognosis of BCa. We further explored the potential roles and mechanisms of these key genes using techniques such as GO and KEGG analysis. Validation of the key genes' roles was achieved through in vitro cellular experiments. Additionally, we investigated the correlation between these key genes and macrophages using the TIMER database. Consequently, this study contributes significantly to the identification of reliable novel biomarkers for BCa and elucidates the underlying molecular mechanisms. These findings hold promise for enhancing the diagnosis, prognosis, and targeted therapy of BCa.
Data collection

We gathered data on the expression, clinicopathological characteristics, and outcomes of 431 patients with BCa from the TCGA database. In adherence to the Declaration of Helsinki and ethical guidelines set by the Institutional Medical Ethics Committee of Lanzhou University Second Hospital, we collected fresh urine samples from five BCa patients and five healthy individuals. Informed consent was secured from all participants, and the pathological diagnosis of BCa was independently verified by at least two qualified pathologists. The GSE49240 dataset, encompassing the gene expression profile of monocyte-derived macrophages isolated from buffy coats of blood donors, was downloaded from the GEO database.

LC–MS/MS analysis

The LC–MS/MS analysis was conducted using a Q Exactive mass spectrometer (Thermo Scientific) coupled to an Easy nLC (Proxeon Biosystems, now Thermo Fisher Scientific). Mobile phase A comprised 2% ACN and 0.1% formic acid, and mobile phase B consisted of 80% ACN and 0.1% formic acid. The analytical column, an Acclaim PepMap C18 nanocolumn (75 μm × 50 cm, 2 μm, 100 Å), operated at a flow rate of 300 nl/min. Both the trap and nanoflow column were maintained at 35 °C. The elution of samples followed a gradient, starting at 1% B for 5 min, increasing to 5% B at 10 min, reaching 25% B at 360 min, and finally 65% B at 480 min. Data-dependent acquisition, with full scans in the 100–1700 m/z range, was performed using an Orbitrap mass analyzer at a resolution of 140,000 at 200 m/z. The most intense precursor ions from a survey scan were selected for MS/MS fragmentation using higher-energy collision dissociation (HCD) at 35% normalized collision energy. MS raw data from each sample were analyzed for identification and quantitative assessment using the MASCOT engine (Matrix Science, London, UK; version 2.2) integrated into Proteome Discoverer 1.4.

Screening DEGs

To identify M2 macrophage genes associated with the progression of bladder cancer, we employed three machine learning algorithms for feature selection: the Least Absolute Shrinkage and Selection Operator (LASSO), Recursive Feature Elimination (RFE) in conjunction with a Support Vector Machine (SVM), and the Random Forest (RF) algorithm. The LASSO algorithm, a regularized regression method suitable for feature selection, was implemented with the parameters family = "binomial", nfolds = 10, and a cross-validated tuning/penalty parameter, using the 'glmnet' package. The RFE-SVM algorithm, used in supervised machine learning for both feature selection and classification, was run with the specifications rfeControl, method = "svmRadial", and trainControl, as facilitated by the 'caret' package. The RF algorithm was executed using the 'VSURF' package with the parameters ntree.thres = 500, nfor.thres = 20, and RFimplem = "randomForest". Ultimately, the genes selected by all three analyses were retained for further investigation (a worked sketch follows the functional-analysis description below).

Functional analysis

We utilized R version 3.6.3 to derive Gene Ontology (GO) terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, aiming to explore the biological functions associated with DEGs correlated with BCa. The visualization of GO terms and KEGG pathways was performed with the ggplot2 package, supported by the clusterProfiler package. The analysis of DEGs covered the Biological Process (BP), Cellular Component (CC), and Molecular Function (MF) categories.
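To make the screening step concrete, the following is a minimal, self-contained R sketch of the three-algorithm feature selection and intersection described above. The simulated matrix merely stands in for the real TCGA-BLCA expression data; the object names (x, y, and the gene labels) are hypothetical, while the VSURF parameters mirror those reported in the text.

## Minimal sketch of the three-algorithm feature selection (simulated data)
library(glmnet)   # LASSO
library(caret)    # RFE (the radial SVM additionally requires kernlab)
library(VSURF)    # random-forest variable selection

set.seed(1)
x <- matrix(rnorm(60 * 19), nrow = 60,
            dimnames = list(NULL, paste0("gene", 1:19)))   # samples x genes
y <- factor(rep(c("low", "high"), each = 30))              # T1/2 vs T3/4

## 1) LASSO with 10-fold cross-validation, binomial family
cv_fit      <- cv.glmnet(x, y, family = "binomial", nfolds = 10)
lasso_coef  <- coef(cv_fit, s = "lambda.min")
lasso_genes <- rownames(lasso_coef)[lasso_coef[, 1] != 0][-1]  # drop "(Intercept)"

## 2) Recursive feature elimination around a radial-kernel SVM
ctrl    <- rfeControl(functions = caretFuncs, method = "cv", number = 5)
rfe_fit <- rfe(x, y, sizes = 2:18, rfeControl = ctrl,
               method = "svmRadial",
               trControl = trainControl(method = "cv", number = 3))
svm_genes <- predictors(rfe_fit)

## 3) VSURF with the parameters reported in the text
vs_fit   <- VSURF(x, y, ntree.thres = 500, nfor.thres = 20,
                  RFimplem = "randomForest")
rf_genes <- colnames(x)[vs_fit$varselect.interp]   # interpretation-step variables

## Candidate genes retained by all three selectors
Reduce(intersect, list(lasso_genes, svm_genes, rf_genes))

On real data, only the genes surviving this three-way intersection would advance to the downstream prognostic analyses.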
Different clinical characteristics analysis

Based on the median expression level of CHI3L1, patients with complete clinical data obtained from the TCGA database were stratified into high- and low-expression groups of CHI3L1. Subsequently, the chi-squared test was employed to assess the correlations between CHI3L1 and clinical characteristics of BCa, with a significance threshold set at p < 0.05.

Cell culture

The immortalized human normal bladder epithelial cell line SV-HUC-1 and the human BCa cell lines UMUC3, 5637, T24, and J82 were procured from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China). Cells were cultured in RPMI 1640 or DMEM high-glucose medium supplemented with 10% heat-inactivated fetal bovine serum (FBS), 1% penicillin, and 1% streptomycin: UMUC3, 5637, T24, and J82 cells were maintained in RPMI 1640, while SV-HUC-1 cells were maintained in DMEM high-glucose medium. Cultures were kept in a humidified incubator at 37 °C with 5% CO2.

Cell transfection

J82 and UMUC3 cells were seeded into 6-well plates at a density of 3 × 10^5 cells per well. The following day, the culture medium was replaced with complete culture medium supplemented with 10 μg/mL polybrene. Subsequently, CHI3L1-interfering and negative control lentiviruses, denoted sh-CHI3L1 and shControl respectively, were introduced into the J82 and UMUC3 cells according to the lentiviral instruction manual. After 24 h, the medium was changed to RPMI 1640 supplemented with 10% FBS. Once the cells had recovered, 2 μg/mL puromycin was added for selection, yielding cell lines with stable knockdown of CHI3L1 expression.

CCK-8 assay

UMUC3, T24, and J82 cells were seeded at a density of 2,000 cells per well in 96-well plates. After 24 h of culture at 37 °C in 5% CO2, the cells were allocated into treatment groups, with at least three replicate wells per group. Each well was then incubated with 10 μL of CCK-8 solution for 2 h daily, and changes in proliferation were assessed by measuring the optical density at 450 nm with a microplate reader. The IC50 value of PTX was calculated from the CCK-8 dose–response data (an illustrative curve fit is sketched after the wound-healing protocol below).

Wound healing assay

Cell migration was assessed through wound healing assays. BCa cells were seeded in a 6-well plate at a density of 2 × 10^5 cells/well and cultured for 24 h. Upon reaching 90% confluence, a scratch was created in the confluent cell monolayer using a sterile 200 µL pipette tip. The cells were then washed with PBS, incubated in serum-free medium, and treated with PTX at half the IC50 concentration. The scratches were photographed at 0 h and 24 h. The experiment was conducted in three independent sessions and repeated five times. Images of migrated cells were observed and analyzed using an Olympus light microscope and ImageJ software (NIH).
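As flagged in the CCK-8 section above, the IC50 estimation can be illustrated with a short R sketch. The study used GraphPad; the drc package below is an open-source stand-in, and the optical-density values are simulated rather than measured.

## Illustrative IC50 fit for CCK-8 dose-response data (simulated readings)
library(drc)

set.seed(7)
dose <- rep(c(0.25, 0.5, 1, 2, 4, 8, 16), each = 3)        # PTX, mM (hypothetical)
od   <- 1.2 / (1 + (dose / 3)^1.3) + rnorm(length(dose), sd = 0.02)
cck8 <- data.frame(dose, od)

## Four-parameter log-logistic curve; the "e" parameter is the IC50
fit <- drm(od ~ dose, data = cck8,
           fct = LL.4(names = c("slope", "lower", "upper", "IC50")))
ED(fit, 50, interval = "delta")   # IC50 estimate with a 95% confidence interval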
Transwell migration and invasion assays

To evaluate cell migration and invasion in vitro, 24-well transwell chambers with or without matrix coating were employed. The lower chamber of the 24-well plates was filled with medium containing 10% fetal bovine serum, while 5 × 10^4 cells in serum-free medium were added to the upper chamber of the transwell for a 24-h culture period. Subsequently, the cells in the lower chamber were fixed with 4% paraformaldehyde for 20 min, stained with 0.1% crystal violet at room temperature for 20 min, photographed, and counted using an inverted microscope.

Colony formation assay

Transfected cells were seeded into 6-well plates (Corning, United States) at a density of 1000 cells per well and cultured in complete medium for approximately 2 weeks, until colonies were visible in the dish. The colonies were then fixed with formaldehyde, stained with 0.1% crystal violet (Vicmed, China), photographed, and counted.

Western blot analysis

Cells from each group in the logarithmic growth phase were harvested and lysed in radioimmunoprecipitation assay (RIPA) buffer, and the protein was extracted. Protein concentration was determined with the BCA Protein Assay Kit. Subsequently, 40 μg of protein from each group was separated by gel electrophoresis and transferred onto a polyvinylidene fluoride (PVDF) membrane. The membrane was blocked with 5% skimmed milk powder for 1 h and incubated overnight at 4 °C with the corresponding primary antibody. After three washes, the membrane was incubated with a secondary antibody for 2 h at room temperature, and protein expression was detected by chemiluminescence.

Xenograft tumor model

Animal experiments were approved by Lanzhou University Second Hospital, School of Medicine, Lanzhou University. Male nude mice were purchased from Chengdu GemPharmatech Company. sh-Control and sh-CHI3L1 UMUC3 cells (2 × 10^6 cells each) were injected into the right axilla of nude mice. When the tumor volume approached 1500 mm^3, the mice were euthanized by cervical dislocation according to the AVMA guidelines for animal euthanasia published by the American Veterinary Medical Association. Tumor volume = (L × W^2)/2, where L is tumor length and W is tumor width.

Immunohistochemistry (IHC)

We dewaxed the tissue sections with xylene and hydrated them in ethanol, then used EDTA for antigen retrieval. To block endogenous enzymes, we treated the sections with hydrogen peroxide. The sections were incubated overnight at 4 °C with the primary antibody (1:200). The next day, we applied a secondary antibody (polyperoxidase-anti-rabbit IgG) for 30 min at room temperature. We developed the staining using DAB (diaminobenzidine) and counterstained with hematoxylin.

Statistical analysis

Data were analyzed with GraphPad Prism and expressed as mean ± SD. One-way analysis of variance (ANOVA) followed by the Tukey–Kramer post hoc test was used to assess differences between groups. A p-value < 0.05 was considered statistically significant. All experiments were repeated at least three times independently.
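The tumor-volume formula and the group comparison described in the statistics section translate directly into base R; the measurements below are invented solely for illustration, and the column names are hypothetical.

## Sketch of the tumor-volume formula and ANOVA/Tukey comparison (toy data)
tumors <- data.frame(
  group     = rep(c("sh-Control", "sh-CHI3L1"), each = 5),
  length_mm = c(18, 20, 17, 19, 21, 11, 12, 10, 13, 11),
  width_mm  = c(14, 15, 13, 15, 16,  8,  9,  8, 10,  9)
)

## Tumor volume = (L x W^2) / 2
tumors$volume_mm3 <- tumors$length_mm * tumors$width_mm^2 / 2

## One-way ANOVA followed by Tukey's post hoc test, as in the text
fit <- aov(volume_mm3 ~ group, data = tumors)
summary(fit)
TukeyHSD(fit)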
Identification of DEGs in BCa urine samples and enrichment analysis

A total of 760 DEGs were identified in fresh urine samples obtained from 5 BCa patients and 5 healthy individuals (Fig. A). Heatmaps show the top 50 of the 398 up-regulated and 362 down-regulated urinary proteins in tumor samples (Fig. B-C) and normal samples (Fig. D-E). To explore the potential functions of these 760 DEGs, we conducted GO enrichment analysis covering biological processes (BP), cellular components (CC), and molecular functions (MF). The analysis revealed that biological processes were mainly enriched in lymphocyte-mediated immunity, cell-substrate adhesion, and urogenital system development (Fig. F). Cellular components included focal adhesion, cell-substrate junction, and immunoglobulin complex (Fig. G). Molecular functions were primarily associated with immunoglobulin receptor binding, cadherin binding, and cytokine binding (Fig. H). Subsequently, KEGG pathway enrichment analysis indicated associations with cell adhesion molecules, proteoglycans in cancer, and the PI3K-Akt signaling pathway (Fig. I).

Identification of CHI3L1 as the key gene

In this analysis, we separated the TCGA-BLCA dataset into two cohorts: a BLCA group and a normal control group. A differential gene expression analysis was performed between these two groups using the Wilcoxon rank-sum test, with a significance threshold of |logFC| > 0.2 and p < 0.01 to delineate differentially expressed genes (Fig. A). Thereafter, we analyzed the GSE49240 dataset, identifying a total of 502 differential genes pertinent to M2 macrophages, comprising 277 upregulated and 225 downregulated genes (Fig. B). A Venn diagram analysis of the overlapping DEGs yielded 19 genes intimately associated with M2 macrophages in the context of bladder cancer: A2M, TFRC, CA2, ACO1, DBI, CHI3L1, CAT, CYBRD1, TALDO1, APEH, RFNG, RTN4R, TFPI, BIN1, KDM6B, MFAP4, ICAM2, OLR1, and THBS1 (Fig. C). To refine the selection of DEGs correlated with the progression of bladder cancer, the TCGA-BLCA dataset was stratified into high-grade (T3/4) and low-grade (T1/2) subgroups, and three machine learning algorithms (LASSO, RFE-SVM, and RF) were applied to the DEGs for feature selection. The SVM algorithm demonstrated optimal accuracy with 13 variables (Fig. D-F), and the pairwise correlation among the 19 genes was computed. The RF and LASSO methods selected subsets of 4 and 5 genes, respectively (Fig. G-K). Finally, the feature genes shared by the LASSO, RFE-SVM, and RF algorithms were selected (Fig. L): CYBRD1, CHI3L1, and TFRC. To ascertain the independent prognostic impact of each gene, we performed univariate Cox regression and Kaplan–Meier analysis on these three genes (Fig. M-O). Consequently, we identified CHI3L1 as the pivotal key gene.
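For reference, the Wilcoxon-based filter and the Venn-style intersection used above can be expressed in a few lines of R. All inputs below are simulated stand-ins, not the study's actual TCGA or GSE49240 data, and the object names are hypothetical.

## Self-contained sketch of the DE filter (|logFC| > 0.2, p < 0.01) and overlap
set.seed(1)
expr <- matrix(rnorm(200 * 40), nrow = 200,
               dimnames = list(paste0("gene", 1:200), NULL))  # log2 expression
is_tumor <- rep(c(TRUE, FALSE), each = 20)

## Per-gene Wilcoxon rank-sum test and log fold change
res <- t(apply(expr, 1, function(g) {
  c(logFC = mean(g[is_tumor]) - mean(g[!is_tumor]),
    p     = wilcox.test(g[is_tumor], g[!is_tumor])$p.value)
}))

tcga_degs <- rownames(res)[abs(res[, "logFC"]) > 0.2 & res[, "p"] < 0.01]

m2_degs <- paste0("gene", 1:50)        # placeholder for the GSE49240 M2 list
intersect(tcga_degs, m2_degs)          # genes shared by both analyses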
CHI3L1 is associated with different clinical characteristics in BCa

In our further analysis, we examined the association between CHI3L1 expression and various clinical characteristics of BCa patients. Notably, we identified a significant correlation between CHI3L1 expression and age and gender: CHI3L1 expression was markedly higher in patients aged over 70 years and in female patients (Additional file: Fig. S1A-B). However, we did not observe a significant correlation between the expression level of CHI3L1 and the smoking status of the patients (Additional file: Fig. S1C). Moreover, in the context of TNM staging, elevated CHI3L1 expression was positively correlated with T-stage and N-stage, although no such correlation was observed with M-stage (Additional file: Fig. S1D-F). Additionally, CHI3L1 expression was higher in non-papillary tumors, in more advanced histologic grades, and in more advanced pathological stages (Additional file: Fig. S1G-I).

Knockdown of CHI3L1 inhibits proliferation, migration, and invasion of BCa cells in vitro

Lentiviral transduction was used to decrease CHI3L1 expression in BCa cells, and the knockdown efficiency was confirmed by western blot analysis (Fig. A). A CCK-8 assay was conducted to investigate whether CHI3L1 affected BCa cell viability; downregulating CHI3L1 inhibited viability in both cell lines (Fig. B). A colony formation assay indicated that CHI3L1 knockdown significantly inhibited the proliferation of J82 and UMUC3 cells (Fig. C). Immunofluorescence staining revealed that Ki67 expression was also lower in the CHI3L1-knockdown group than in the control group (Additional file: Fig. S2A). Moreover, wound healing and transwell assays further revealed that decreased CHI3L1 expression markedly inhibited the migration and invasion of J82 and UMUC3 cells (Fig. D-F). Consistent with these results, immunofluorescence staining showed that E-cadherin was increased after CHI3L1 downregulation (Additional file: Fig. S2B). Additionally, western blot results indicated that E-cadherin levels were significantly increased and N-cadherin levels significantly decreased compared with the control group (Additional file: Fig. S2C). Meanwhile, CCK-8 results demonstrated a significant increase in the sensitivity of BCa cells to gemcitabine after downregulation of CHI3L1 (Additional file: Fig. S2D-E).

PTX attenuates BCa cell proliferation, migration, and invasion through inhibition of CHI3L1

To ascertain the role of CHI3L1 in BCa development, we conducted experiments employing a specific inhibitor: pentoxifylline (PTX), an FDA-approved drug recognized as a specific inhibitor of CHI3L1. PTX inhibited the growth of J82 and UMUC3 cells, with IC50 values of 6.95 mM and 3.05 mM, respectively (Fig. A). Moreover, after PTX treatment, BCa cells gradually became elongated or small and rounded, and detached (Fig. B). Subsequent CCK-8 and colony formation assays demonstrated significant inhibition of proliferation in J82 and UMUC3 cells following 48 h of PTX treatment (Fig. C-D). To assess the impact of PTX on BCa cell migration and invasion, we conducted wound healing and transwell experiments. Transwell assays confirmed the inhibitory effects of PTX on both BCa cell migration and invasion (Fig. E-F). Similarly, wound healing assays showed that PTX treatment effectively restrained the migration of J82 and UMUC3 cells (Fig. G-H).

PTX enhances the inhibitory effect of GEM on BCa cell activity

Currently, GEM is an important component of first-line chemotherapy for malignancies such as BCa, but the disease remission rate for muscle-invasive or metastatic bladder cancer is only 49%, and the disease remains prone to recurrence and drug resistance.
To further investigate whether CHI3L1 is associated with GEM sensitivity in BCa cells, we first exposed J82, UMUC3, and T24 BCa cells to different concentrations of GEM for 48 h and then performed a CCK-8 assay to evaluate cell viability. The IC50 values of J82, UMUC3, and T24 cells were 0.55 μM, 1.11 μM, and 1.21 μM, respectively, as calculated with GraphPad Prism (Fig. A-C). To further optimize the response of BCa cells to GEM, we investigated the effect of combining GEM with PTX on cell viability. The GEM concentration was set at half the IC50 based on the CCK-8 results, and cells were then exposed to GEM with or without PTX for 48 h. In all three cell lines, cell viability was significantly lower in the PTX + GEM group than in the GEM group (Fig. D-F).

CHI3L1 regulates BCa progression via the PI3K/AKT signaling pathway

To unravel the potential mechanisms of CHI3L1 in tumorigenesis and progression, we performed a comprehensive analysis of proteins interacting with CHI3L1 and genes associated with CHI3L1 using STRING and GEPIA2. In the protein-protein interaction (PPI) network, 50 proteins experimentally shown to interact with CHI3L1 were identified (Additional file: Fig. S3A). Additionally, we pinpointed the top 100 genes most positively associated with CHI3L1, with SULF1, CCN4, GFPT2, BCL2A1, CCL11, and TNFAIP6 being the top 6 genes (Additional file: Fig. S3B). By intersecting genes directly interacting with or related to CHI3L1 in a Venn analysis, we identified four genes: IL10, CD163, CCL18, and POSTN (Additional file: Fig. S3C). Notably, CD163 is a marker of M2-type macrophages, and CCL18 is a member of the CC chemokine family. GO enrichment analysis using both datasets indicated that genes directly interacting or associated with CHI3L1 were primarily related to macrophage activation, mononuclear cell proliferation, and regulation of chemotaxis. KEGG pathway enrichment analysis revealed associations with PI3K-AKT signaling, NF-kappa B signaling, ECM-receptor interaction, bladder cancer, and chemokine signaling pathways (Additional file: Fig. S3D). To elucidate the role of CHI3L1 in BCa, TCGA-BLCA samples were stratified into high- and low-expression groups based on the median expression of CHI3L1. Subsequent differential analyses identified 3877 DEGs, and volcano plots were employed to visually depict the distribution of all these genes (Fig. A). To understand the potential functions of these 3877 genes, we conducted GO enrichment analysis. The results revealed that BP was primarily involved in humoral immune response, phagocytosis, macrophage activation, and tumor necrosis factor production (Fig. B). CC was associated with immunoglobulin complex, vesicle lumen, basement membrane, interstitial matrix, and endocytic vesicle (Fig. C). MF included immunoglobulin receptor binding, cytokine activity, chemokine activity, complement receptor activity, and fibronectin binding (Fig. D). Subsequent KEGG enrichment analysis revealed significant associations of CHI3L1 with pathways such as cytokine-cytokine receptor interaction, the JAK-STAT signaling pathway, the PI3K-AKT signaling pathway, focal adhesion, and the PPAR signaling pathway, among others (Fig. E). We then performed GSEA and found that CHI3L1 was mainly associated with the PI3K-AKT signaling pathway, the inflammatory response pathway, apoptosis, pathways in cancer, and ECM-receptor interaction (Fig. F).
In summary, the analysis suggests that CHI3L1, strongly associated with immune regulation such as macrophage activation, may contribute to the development of BCa through signaling pathways like PI3K-AKT. To further explore the effect of CHI3L1 on the PI3K-AKT pathway in bladder cancer cells, the protein expression levels of PI3K, p-PI3K, AKT, and p-AKT were assessed by western blot. Knockdown of CHI3L1 markedly reduced the protein levels of p-PI3K and p-AKT (Fig. G-H). These results indicated that CHI3L1 may influence bladder cancer cell function through its regulation of the PI3K-AKT signaling pathway.

Downregulated CHI3L1 expression suppresses tumor growth in vivo

We have demonstrated that CHI3L1 knockdown significantly inhibits BCa cell proliferation, metastatic ability, and gemcitabine resistance. To further investigate the effect of CHI3L1 on the proliferative ability of BCa in vivo, we injected control and CHI3L1-stable-knockdown UMUC3 BCa cells subcutaneously into BALB/c nude mice, constructing a subcutaneous tumor-bearing nude mouse model. We subsequently monitored tumor formation, as well as tumor volume and weight, in both groups of nude mice (Fig. A-B). The tumor volume was significantly reduced in the CHI3L1-stable-knockdown group compared to the control group (Fig. C), as was the tumor weight (Fig. D). Meanwhile, immunohistochemistry staining showed a decrease in CHI3L1, Ki67, and p-AKT expression and an increase in E-cadherin expression in the sh-CHI3L1 group compared to the control group (Fig. E). Collectively, our findings suggest that decreased CHI3L1 expression leads to inactivation of AKT signaling in BCa, which impedes BCa progression.

Correlation of CHI3L1 expression with immune characteristics

To further explore the correlation between CHI3L1 and tumor immune infiltration, this study utilized the TIMER database to analyze immune infiltration in BCa, stratifying patients by high and low CHI3L1 expression levels. The results indicated that BCa patients in the high CHI3L1 expression group exhibited significantly elevated infiltration levels of aDC, B cells, CD8+ T cells, cytotoxic cells, DC, eosinophils, iDC, macrophages, mast cells, neutrophils, NK CD56dim cells, NK cells, pDC, T cells, Tem, TFH, Tgd, Th1 cells, Th2 cells, and Treg compared to BCa patients with low CHI3L1 expression. Conversely, high CHI3L1-expressing BCa patients demonstrated significantly lower infiltration of NK CD56bright cells and Th17 cells compared to their low CHI3L1-expressing counterparts (Fig. A-B). We then investigated the correlation between CHI3L1 expression levels and immune infiltration in BCa. The strongest correlation with CHI3L1 was observed for macrophages (r = 0.47, p < 0.001) (Fig. C), aligning with our earlier findings. Additionally, CHI3L1 expression exhibited positive correlations with the infiltration levels of NK cells (r = 0.438, p < 0.001) (Fig. D), neutrophils (r = 0.627, p < 0.001) (Fig. E), B cells (r = 0.558, p < 0.001) (Fig. F), T cells (r = 0.505, p < 0.001) (Fig. G), Th1 cells (r = 0.680, p < 0.001) (Fig. H), Th2 cells (r = 0.346, p < 0.001) (Fig. I), CD8+ T cells (r = 0.295, p < 0.001) (Fig. J), eosinophils (r = 0.349, p < 0.001) (Fig. K), mast cells (r = 0.392, p < 0.001) (Fig. L), DCs (r = 0.541, p < 0.001) (Fig. M), Tregs (r = 0.521, p < 0.001) (Fig. N), aDC (r = 0.461, p < 0.001) (Additional file: Fig. S4A), iDC (r = 0.437, p < 0.001) (Additional file: Fig. S4B), pDC (r = 0.343, p < 0.001) (Additional file: Fig. S4C), NK CD56dim cells (r = 0.556, p < 0.001) (Additional file: Fig. S4D), cytotoxic cells (r = 0.492, p < 0.001) (Additional file: Fig. S4E), Tem (r = 0.403, p < 0.001) (Additional file: Fig. S4F), TFH (r = 0.417, p < 0.001) (Additional file: Fig. S4G), and Tgd (r = 0.176, p < 0.001) (Additional file: Fig. S4H). Conversely, CHI3L1 expression demonstrated a negative correlation with the infiltration levels of Th17 cells (r = −0.121, p = 0.014) (Additional file: Fig. S4I) and CD56bright cells (r = −0.212, p < 0.001) (Additional file: Fig. S4J). Moreover, there was no significant correlation between the expression level of CHI3L1 and the infiltration levels of T helper cells (r = 0.077, p = 0.118) (Additional file: Fig. S4K) and Tcm (r = 0.002, p = 0.966) (Additional file: Fig. S4L).

CHI3L1 expression is associated with M2 macrophage infiltration and polarization

The immune infiltration status within the TME can significantly impact patient prognosis. Previous studies have indicated a correlation between CHI3L1 overexpression and poor prognosis in BCa patients, with the strongest association observed in macrophages. To elucidate the relationship between CHI3L1 expression and macrophage infiltration levels, we employed the XCELL and EPIC algorithms. CHI3L1 expression exhibited a positive correlation with macrophage infiltration levels as determined by XCELL (r = 0.622, p = 6.59E-41) (Additional file: Fig. S5A) and EPIC (r = 0.662, p = 6.25E-48) (Additional file: Fig. S5B). Furthermore, our analysis revealed a positive correlation between CHI3L1 expression and M2 macrophage infiltration across four distinct algorithms: CIBERSORT (r = 0.295, p = 7.92E-09) (Additional file: Fig. S5C), XCELL (r = 0.395, p = 2.89E-15) (Additional file: Fig. S5D), QUANTISEQ (r = 0.412, p = 1.38E-16) (Additional file: Fig. S5E), and CIBERSORT-ABS (r = 0.666, p = 1.25E-48) (Additional file: Fig. S5F). Finally, our investigation indicated a significant association between CHI3L1 expression in BCa and M2 macrophage markers, including CD163 (Additional file: Fig. S5G), MRC1 (Additional file: Fig. S5H), MS4A4A (Additional file: Fig. S5I), and VSIG4 (Additional file: Fig. S5J).
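A hedged sketch of how such expression-infiltration correlations are computed: Spearman's rho between CHI3L1 expression and a macrophage score across the same samples. Both vectors below are simulated placeholders, not the TCGA-BLCA values reported above.

## Spearman correlation between expression and an infiltration score (toy data)
set.seed(42)
n        <- 400                                    # roughly the TCGA-BLCA cohort size
chi3l1   <- rnorm(n)                               # hypothetical log2 expression
m2_score <- 0.6 * chi3l1 + rnorm(n, sd = 0.8)      # hypothetical deconvolution score

cor.test(chi3l1, m2_score, method = "spearman")    # returns rho and its p-value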
The human CHI3L1 gene is located on chromosome 1q32.1 and contains 7498 base pairs and 10 exons. It is also named YKL-40 after the three N-terminal amino acid residues of the secreted form, tyrosine (Y), lysine (K), and leucine (L), and its molecular weight of 40 kDa. CHI3L1 belongs to the glycoside hydrolase 18 family, which contains both chitinases capable of hydrolyzing chitin polysaccharides and chitinase-like proteins; CHI3L1 itself cannot degrade chitin and lacks chitinase activity because glutamic acid is replaced by leucine at a key position in its catalytic side chain. CHI3L1 has been found to be expressed in macrophages, vascular smooth muscle cells, T cells, and neutrophils, as well as in several types of cancer cells, including breast cancer, osteosarcoma, ovarian cancer, lung cancer, and glioblastoma. Elevated serum levels of CHI3L1 have been associated with poor prognosis and shorter survival in metastatic breast cancer patients. The astrocyte paracrine factor CHI3L1 was found to promote the formation of metastatic lesions, and inhibition of CHI3L1 attenuates astrocyte-tumor cell interactions and inhibits cortical tumor growth in vivo. It has also been demonstrated that anti-CHI3L1 antibodies inhibit lung tumor growth and metastasis by suppressing M2 polarization. Meanwhile, elevated CHI3L1 expression has been shown to be associated with poor prognosis and lymph node metastasis in BCa patients, and CHI3L1 has been linked to neutrophil infiltration in BCa. However, there are few studies on the mechanism of action of CHI3L1 in BCa, its role in chemotherapy resistance, and its relationship with macrophages.

In this study, we conducted a comprehensive analysis to identify potential BCa biomarkers associated with M2 macrophages using urine proteomics and bioinformatics. Through a meticulous comparison of DEGs identified in urine proteomics, TCGA-BLCA, and the GSE49240 dataset via machine learning analysis, we identified three candidate genes: CYBRD1, CHI3L1, and TFRC. Further scrutiny was directed towards evaluating the expression levels of these three candidate genes in BCa and their association with patient prognosis. High CHI3L1 expression in BCa patients was positively correlated with poorer overall survival (OS), suggesting a potential association between CHI3L1 and the initiation and progression of BCa. Additionally, CHI3L1 expression displayed positive correlations with patient age, gender, more advanced histologic stage, and more advanced pathologic stage.

To investigate the involvement of CHI3L1 in BCa, we conducted a comprehensive analysis of DEGs associated with CHI3L1 expression using GO and KEGG enrichment analysis. Our findings indicate that CHI3L1 plays a pivotal role in several biological processes, including humoral immune response, immune response-activating signal transduction, tumor necrosis factor production, and macrophage activation. KEGG enrichment analysis further suggested that CHI3L1 function is predominantly associated with the PI3K-Akt signaling pathway, cell adhesion molecules, and the NF-kappa B signaling pathway. To substantiate the impact of CHI3L1 on bladder cancer development, experimental validations were conducted using PTX, a specific inhibitor of CHI3L1.
These experiments showed that inhibiting CHI3L1 reduces the proliferation, migration, and invasion of BCa cells and synergistically increases the inhibitory effect of GEM on cell viability.

Macrophages, integral components of the tumor microenvironment, play a crucial role in promoting tumor cell migration, invasion, stromal degradation, and angiogenesis. Both clinical and experimental studies have demonstrated that TAMs facilitate solid tumor metastasis by releasing various cytokines, including chemokines, inflammatory factors, and growth factors. For instance, macrophages can stimulate tumor angiogenesis through the secretion of IL-1, VEGF, and MMP-2. M2-type TAMs are pivotal contributors to immunosuppression and tumor development, and multiple studies have highlighted the significant regulatory role of TAM polarization towards an M2-like phenotype in bladder cancer invasion, metastasis, and drug resistance. Li et al. discovered that dauricine inhibits M2 polarization of macrophages by down-regulating the PI3K/Akt signaling pathway, which in turn reduces CHI3L1 secretion and ultimately impedes the progression of prostate cancer cells. Moreover, research has revealed that tumor-recruited M2 macrophages promote metastasis in gastric and breast cancers through the secretion of the CHI3L1 protein. To explore the interplay between CHI3L1 and macrophages in BCa, we employed two algorithms, XCELL and EPIC; the results demonstrated a positive correlation between CHI3L1 expression in BCa and the extent of macrophage infiltration. Concurrently, our analysis revealed a positive correlation between CHI3L1 expression in BCa and the infiltration levels of M2 macrophages, as determined by four distinct algorithms, namely CIBERSORT, XCELL, QUANTISEQ, and CIBERSORT-ABS. Furthermore, we found a significant correlation between CHI3L1 expression in BCa and M2 macrophage markers, including CD163, MRC1, MS4A4A, and VSIG4. These findings suggest that the CHI3L1 protein, secreted by M2 macrophages, may promote the progression and metastasis of BCa through pathways such as PI3K-Akt.

In summary, we found that CHI3L1 is highly expressed in BCa and correlates with poor prognosis. We confirmed that CHI3L1 affects the proliferation, invasion, migration, and GEM sensitivity of BCa cells, and that CHI3L1 regulates bladder cancer progression through the PI3K-AKT signaling pathway. CHI3L1 may be a key factor by which M2 macrophages promote BCa progression. In addition, PTX merits further study as an adjuvant therapeutic agent for BCa.
Additional file 1: Fig. S1. Associations between CHI3L1 expression and different clinical characteristics in BLCA. (A) Age. (B) Gender. (C) Smoker. (D) Pathologic T stage. (E) Pathologic N stage. (F) Pathologic M stage. (G) Subtype. (H) Histologic grade. (I) Pathologic stage. * p < 0.05, ** p < 0.01, *** p < 0.001.
Additional file 2: Fig. S2. Knockdown of CHI3L1 inhibits proliferation, migration, invasion, and gemcitabine resistance of BCa cells in vitro. (A) Immunofluorescence determination of KI67 expression in J82 and UMUC3 cells after transfection. (B) Immunofluorescence determination of E-cadherin expression in J82 and UMUC3 cells after transfection. (C) Western blot determination of E-cadherin and N-cadherin expression in J82 and UMUC3 cells after transfection. (D-E) Cell viability of J82 and UMUC3 cells after treatment with appropriate concentrations of GEM for 48 h after transfection.
Additional file 3: Fig. S3. CHI3L1 functional clustering and interaction network analysis of related genes. (A) PPI network of 50 proteins interacting with CHI3L1. (B) Top 6 genes positively associated with CHI3L1. (C) Venn diagram of the CHI3L1-related genes and interacting genes. (D) GO term and KEGG pathway analyses of the CHI3L1-related genes and interacting genes.
Additional file 4: Fig. S4. Correlation between CHI3L1 expression levels and immune infiltration in BCa. (A) aDC, (B) iDC, (C) pDC, (D) NK CD56 dim cells, (E) cytotoxic cells, (F) Tem, (G) TFH, (H) Tgd, (I) Th17 cells, (J) CD56 bright cells, (K) T helper cells, (L) Tcm.
Additional file 5: Fig. S5. The connection among CHI3L1 expression, macrophage infiltration, and gene markers of M2 macrophages. (A, B) Correlation between CHI3L1 expression and macrophage infiltration levels using two algorithms: XCELL and EPIC. (C-F) Correlation between CHI3L1 expression and M2 macrophage infiltration levels using four algorithms: CIBERSORT, XCELL, QUANTISEQ, and CIBERSORT-ABS. (G-J) Correlation among CHI3L1 expression with CD163, MRC1, MS4A4A, and VSIG4.
Additional file 6.
Attitude and behaviour of Dutch Otorhinolaryngologists to Evidence Based Medicine | e014d2de-b600-4237-be45-b98be6d08a33 | 6936769 | Otolaryngology[mh] | Evidence Based Medicine (EBM) is the foundation modern clinical care is built on. In 1996, Sackett et al. formulated the current definition of EBM: "the conscientious, judicious and explicit use of best available evidence, integrating with clinical judgment and patient values to provide the best individual care for the patient", which was based on philosophical ideas originating from mid-19th century Paris. EBM encourages clinicians to look at individual patients' needs and to track down the best available evidence to answer individual clinical questions. The importance of EBM lies in its ambition to create optimal patient care. Literature reports that 73–84% of patients receive evidence-based care. McGlynn (2003) reported that 11% of patients received care that was not in accordance with the latest evidence and was potentially harmful. Implementation of EBM can be achieved by spreading the outcomes of clinical studies in clinical journals, at conferences, and by the creation of evidence-based guidelines. Nonetheless, a survey assessing guideline adherence in Otorhinolaryngology showed 45% nonadherence to guidelines, most probably due to guidelines that do not provide strict recommendations. Even with improving modern techniques to disseminate evidence through the internet (UpToDate, PIER, Clinical Evidence), implementation in clinical practice continues to be difficult. To bridge the gap between science and practice, Evidence Based Practice (EBP) was developed. By adhering to 5 steps (Ask, Access, Appraise, Apply, and Assess), a physician is assisted in integrating scientific evidence into daily practice. It is estimated that 2 questions are raised for every 3 patients a surgeon sees. However, surgeons search for answers to only 50% of these questions. We believe this might be caused by barriers surgeons experience in practicing EBM. Barriers differ among different types of health care providers, e.g., general practitioners versus secondary care physicians, or consultants versus residents. Practicing Evidence Based Medicine requires specific competencies, including knowledge and skills. Besides, individual attitude and behaviour towards EBM are of prime importance to properly practice EBM. Research on the knowledge of, skills in, and attitude and behaviour towards EBM has been performed in several medical fields. To the best of our knowledge, no research of this kind has been performed in the field of otorhinolaryngology. Therefore, to gain insight into possible improvements in EBM adherence, we assessed the attitude and behaviour of Dutch Ear, Nose & Throat (ENT) surgeons towards EBM.
Study population
All currently practicing ENT surgeons (n = 501) and ENT residents (n = 106) in the Netherlands, who were registered as members of the Dutch Society of Otorhinolaryngology—Head and Neck Surgery (Nederlandse Vereniging voor Keel-Neus-Oorheelkunde en Heelkunde van het Hoofd-Halsgebied) on 26-03-2018, were included. There were no exclusion criteria. Informed consent was considered provided if a participant filled out the questionnaire. General data on Dutch Otorhinolaryngologists were provided by the Dutch Society of Otorhinolaryngology—Head and Neck Surgery and extracted from their website on the 16th of April 2018. The Medical Ethical Research Committee of the University Medical Centre Utrecht (UMCU) judged that the Medical Research Involving Human Subjects Act does not apply to the study (February 28th, 2018).
Questionnaires
The questionnaire consisted of three sections: (1) personal characteristics, (2) attitude towards EBM, and (3) behaviour towards EBM. The complete questionnaire can be found in the supporting information ( Questionnaire English and Questionnaire Dutch). Personal characteristics comprised sex, year of birth, year of registry within the database of the Dutch Society of Otorhinolaryngology—Head and Neck Surgery, PhD fulfilment, and type of employment. A self-report question about EBM attitude was asked, using a 5-point Likert scale (1: very unimportant, 5: very important).
EBM attitude. Attitude was defined as the mind-set of the respondents towards the principles of EBM. It was assessed using the validated McColl Questionnaire (1998), which consists of seven questions and was forward-backward translated into Dutch. One question was assessed with a scale ranging from 0% (very negative) to 100% (very positive). The other six questions were assessed with a scale ranging from 0% (very positive) to 100% (very negative); these scores were inverted prior to statistical analysis.
EBM behaviour. EBM behaviour was assessed in two ways. First, we investigated the barriers to applying EBM based on a validated questionnaire consisting of 19 statements. The questionnaire scores statements on a 5-point Likert scale (1 = totally disagree, 5 = totally agree). The original validated questionnaire was adjusted: one question was removed as we considered it irrelevant to the field of otorhinolaryngology, and where necessary, statements were minimally adjusted to fit the field of otorhinolaryngology. For one question ( Q17 my residents and interns motivate me to work according to EBM ) one option was added to the scale (6 = not applicable). In the second part of the questionnaire we examined information seeking behaviour, based on a non-validated Dutch questionnaire that was developed for general practitioner trainees (unpublished data). This questionnaire consisted of 7 questions encompassing (1) access and usage of scientific information and (2) factors contributing to clinical decision making. Where necessary, questions were minimally adjusted to fit otorhinolaryngology.
Logistics
The questionnaire was distributed on the 27th of March 2018 to the members by the Dutch Society of Otolaryngology—Head and Neck Surgery through an email alert. Information on the study and a URL link to the questionnaire were provided in the email. The questionnaire was administered in an electronic questionnaire system: NetQuestionnaires. To maximize the response rate, several actions were taken.
First, the Dutch Society of Otolaryngology—Head and Neck Surgery sent a reminder email 2 weeks after the initial email. Second, a reminder email was sent after 4 weeks directly to ENT surgeons and residents. Third, to increase awareness, a reminder was added to the PowerPoint presentation of one colleague of the otorhinolaryngology department of the UMCU at the biannual congress for Dutch ENT surgeons. Respondents were able to fill out the questionnaire until the 1st of June 2018.
Outcomes
Primary outcomes were EBM attitude and behaviour. The secondary outcome was information seeking behaviour. The outcomes were measured using the questionnaires as described under methods.
Statistical analysis
After the questionnaires were completed, the answers were automatically saved in NetQuestionnaires. Data of completed questionnaires were exported to an SPSS file. All data were analysed in SPSS version 21.0. We visually checked data for normality and performed Kolmogorov-Smirnov and Shapiro-Wilk tests of normality. Normally distributed data were presented as means with standard deviations; for non-normally distributed data, medians and quartiles were calculated. Mann-Whitney U tests were used to compare different groups. Chi-square tests were used to compare differences in categorical data. For question 2 of part 3, the average was analysed if participants answered with more than one number.
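As a concrete illustration of these steps, the R sketch below mirrors the analysis on simulated data. The variable names and values are hypothetical, and the original analyses were performed in SPSS version 21.0, so this is an equivalent re-expression rather than the study's own code.

```r
# Illustrative re-expression of the SPSS analysis steps on simulated data.
set.seed(1)
group  <- factor(c(rep("resident", 10), rep("surgeon", 58)))
mccoll <- pmin(pmax(rnorm(68, mean = 50, sd = 20), 0), 100)  # 0-100 scale

# Six of the seven McColl items run from 0 (very positive) to 100
# (very negative), so they are inverted before analysis.
mccoll_inv <- 100 - mccoll

# Normality tests (visual checks omitted here).
shapiro.test(mccoll_inv)
ks.test(mccoll_inv, "pnorm", mean(mccoll_inv), sd(mccoll_inv))

# Non-normally distributed data: report median and quartiles.
quantile(mccoll_inv, probs = c(0.25, 0.50, 0.75))

# Mann-Whitney U test comparing residents with surgeons.
wilcox.test(mccoll_inv ~ group)

# Chi-square test for a categorical variable (e.g., prior EBM training).
training <- sample(c("yes", "no"), 68, replace = TRUE, prob = c(0.74, 0.26))
chisq.test(table(group, training))
```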
Of the 501 ENT surgeons and 106 ENT residents, 103 (17%) respondents started the questionnaire; 58 ENT surgeons (12%) and 10 residents (9%) completed it (total n = 68, 11%). Only data from respondents who completed the questionnaire were analysed. Characteristics of (non)responders (sex, year of birth, registry time and type of employment) are presented in . Baseline characteristics of responders and non-responders were similar ( ).
EBM attitude
The overall median of the McColl questionnaire was 50 (interquartile range (IQR) 35) ( ). The outcome of the self-rating attitude question towards EBM was high (median 4, IQR 0, on a 5-point Likert scale, 1: very unimportant, 5: very important). We found no significant differences in the single self-reported attitude when comparing (1) ENT residents to ENT surgeons and (2) ENT surgeons with different registry time. However, comparing the outcomes of the McColl questionnaire, one significant difference (p = 0.023) was found in question 2, 'How would you describe the attitude of most of your colleagues towards EBM', between residents (median 30, IQR 12) and ENT surgeons (median 51, IQR 33). No significant differences were found between ENT surgeons with different registry time in the McColl questionnaire.
EBM behaviour
Barriers
The most important barriers for EBM were: when busy, searching for clinical evidence is not a priority to me (median 4, IQR 2), and the time I have per patient is insufficient to also search for answers to my questions (according to the principles of EBM) (median 4, IQR 1) ( ). For the question 'During consultations, I have sufficient time to work according to the principles of EBM', a significant difference (p = 0.047) was found between ENT residents (median 2, IQR 1) and ENT surgeons (median 2.5, IQR 2). We found no significant differences when comparing ENT surgeons with different registry time.
Information seeking behaviour
Ninety percent of respondents performed or let someone perform literature searches in the last month before filling out the questionnaire; these respondents performed a median of four literature searches (median 4, IQR 4). This search influenced clinical practice in half of the cases (median 50, IQR 43). Of all respondents, 74% had some form of EBM training, and 88% had access to full-text articles at work, in the consultation room ( ). No significant differences were found between residents and ENT surgeons, or within ENT surgeons with different registry time. Half of the respondents had performed a literature search in the two weeks before participating in the questionnaire. Of those, 50% often read parts of the article (median 4.0, IQR 1.0), while 3% always read the entire article (median 3.0, IQR 1.0) ( ). No differences were found between residents and ENT surgeons, or between ENT surgeons with different registry time. Reported factors influencing clinical decision-making were diverse. The most important factors were the respondents' own preference (median 4.0, IQR 1.0), the patient's prognosis (median 4.0, IQR 1.0), the patient's condition (median 4.0, IQR 1.0), the patient's preference (median 4.0, IQR 1.0), and the ENT surgeons' gut feeling (median 4.0, IQR 0.0) ( ). A significant difference was found between ENT residents (median 4.0, IQR 1.0) and ENT surgeons (median 3.0, IQR 0.0) in the factor 'my colleague's preference' (p = 0.022). No significant differences were found between ENT surgeons with different registry time.
National guidelines and PubMed/Embase were the sources used most often by surgeons ( ). No significant differences were found between residents and ENT surgeons. ENT surgeons registered < 10 years used PubMed/Embase more often (median 6, IQR 1) than their older colleagues registered > 30 years (median 4, IQR 0) (p = 0.043). ENT surgeons registered < 10 years also used UpToDate more often (median 4.0, IQR 2) than their colleagues registered 20–30 years (median 3, IQR 3) (p = 0.03).
In this study, we investigated the attitude and behaviour of Dutch ENT surgeons towards EBM. We noticed an overall moderately positive attitude towards EBM. We identified several barriers to practicing EBM, with limited time as the main barrier. Limited time in the outpatient clinic is a more important barrier for residents to practice EBM than for ENT surgeons. By evaluating information seeking behaviour, we identified the respondents' own preference and gut feeling as the main contributing factors in their clinical decision making. Even though working according to the principles of EBM (self-rated attitude) was considered 'important', the attitude towards EBM measured by the McColl questionnaire, a validated multi-item instrument, turned out to be moderate. The scores on the McColl test (median 50) are comparable to the outcomes of a survey in Dutch general practitioners (mean 56–62.8) and Dutch general surgeons (individual answers ranging between 44 and 78). Several papers compare the mean McColl score to a single self-reported attitude and find significant overestimation of self-reported EBM attitude compared to the outcome of the McColl questionnaire. We believe, however, that the questions of the McColl questionnaire are too varied to justify a direct comparison with self-reported attitude. It is interesting to compare self-reported attitude (median 4.0, important) to McColl question 4 ( what percentage of your clinical practice is currently evidence based ; median 53). This indicates that respondents understand the importance of EBM but do not always practice it. In accordance with the literature, we found that limited time is an important barrier to practicing EBM. In clinical care, time is scarce. Dutch medical specialists spend 40% of their time on administration. This administrative pressure might contribute to the largest barrier: limited time. If the administrative load were reduced, the extra time might be spent on practicing EBM. Residents scored a statistically significantly lower median score than ENT surgeons on the question 'During outpatient clinic consultations, I have sufficient time to work according to the principles of EBM'. This might be explained by factors related to their experience, such as clinical decision making; however, we cannot confirm this in our data. We found that access to full-text files from different databases differs between locations. Access to full-text files also depends on the subscriptions to scientific journals of the different hospitals or individuals. This again underlines the need for a movement towards open-access publication. PubMed/Embase and the ENT guidelines are the most popular databases used by the participants of our study. However, an earlier study in the Netherlands showed that 45% of all Dutch ENT surgeons showed nonadherence to these guidelines. Even though respondents in our study reported frequent use of the national guidelines, they do not always adhere to their recommendations. This fits the outcome of our study on how ENT surgeons make clinical decisions, in which 'retrieving evidence' scored relatively low compared to, e.g., 'gut feeling' or 'personal preferences'. Some methodological issues need to be addressed. The response rate of only 11% is a limitation of our study. As a consequence, some differences may not have been detected owing to reduced statistical power. The recruitment through e-mail instead of postal services might be related to the limited response rate.
In addition, the electronic questionnaire system was technically not accessible on mobile Apple products. Another explanation might be that the respondents are saturated by the number of questionnaires they receive. One could suspect response bias because of differences in characteristics of the respondents; however, as seen in , our study participants are representative of the complete population of Dutch ENT surgeons. ENT surgeons registered less than ten years are overrepresented in our study. This might indicate a more positive attitude towards EBM compared to ENT surgeons registered longer. 'Younger' surgeons might have had more experience with EBM in university and during their residency, because EBM was formally defined in 1996. Surprisingly, our results do not show many differences in attitude and behaviour between groups of ENT surgeons with different decades of registration. This raises the question of whether the 'extra' education younger surgeons received has a major influence on EBP, or whether older surgeons have actively retrained themselves. Another question raised is whether attitude and behaviour towards EBM are as important a factor in the actual practice of EBM, assuming that EBM attitude and behaviour have improved over the years. According to Chapman et al, there has been little change in the proportion of patients receiving evidence-based care in internal medicine when comparing data from 1995 and 2013: in 1995, 82% of internal medicine patients received evidence-based treatment, compared to 84% in 2013. To fully comprehend the extent of the influence of EBM attitude and behaviour on Dutch ENT care, one first needs to perform an audit of the amount of evidence-based care in Dutch ENT practice. To our knowledge, no such study has been performed. Future research should investigate how to resolve the experienced barriers and the effect on practicing EBM. EBM competency is not limited to attitude and behaviour but also entails knowledge and skill. In order to fully comprehend EBM competency in Dutch ENT surgeons, research assessing knowledge and skill would complement our current study. Also, an educational intervention (face-to-face meetings, clinically integrated teaching) could improve EBM attitude and thereby indirectly influence EBM behaviour. In conclusion, Dutch ENT surgeons and residents scored moderately positively on the McColl questionnaire assessing attitude. The main barriers they experience are time related. ENT surgeons rely most on their own preference and gut feeling when making clinical decisions.
S1 File Questionnaire. (DOCX)
S2 File Questionnaire. (DOCX)
S1 STROBE STROBE 2007 (v4) statement—Checklist of items that should be included in reports of cross-sectional studies. (DOCX)
S1 Dataset (SAV)
Apexification of an Endodontically Failed Permanent Tooth with an Open Apex: A Case Report with Histologic Findings | 90e08eeb-ae98-4d85-8bc8-c44d19bc0076 | 11857209 | Dentistry[mh] | Traumatic injuries to permanent teeth may result in damage to the periodontium, adjacent bone, and the neurovascular supply of the pulp. The outcome of the compromised pulp will be dictated by the natural balance between cellular ingrowth and bacterial infiltration, resulting in either sterile necrosis, infection-induced necrosis, revascularization, or regeneration of the injured pulp. A significant consequence of pulp necrosis in a traumatized immature tooth is the cessation of root growth. This results in thin, fragile dentinal walls, complicating appropriate debridement and optimal apical sealing with conventional endodontic treatment procedures. The management of such cases is challenging for dental professionals and necessitates different approaches. Traditionally, the apexification procedure served as a treatment modality to either induce the formation of an apical barrier or continue the development of an immature apex. For an extended period, apexification entailed the application of calcium hydroxide (Ca[OH]2) paste to achieve root-end closure, followed by root canal therapy. This long-term therapy presents several disadvantages, such as challenges in patient follow-up, inconsistency in the process of apical closure, and compromised tooth structure, which increases the risk of root fracture. Subsequently, mineral trioxide aggregate (MTA), a calcium silicate-based hydrophilic cement, was introduced to the field of endodontics by Torabinejad and colleagues. This material demonstrated biocompatibility, induced odontoblastic development, exhibited antibacterial properties, possessed low solubility, and expanded upon setting; hence, MTA emerged as the preferred material for apexification by facilitating the placement of an artificial apical plug to encourage apical-end closure. Nevertheless, MTA possesses hydrophilic characteristics that necessitate moisture for the setting process, prolonged setting times of up to 3 h, and handling challenges, prompting the exploration of alternative materials. Subsequent calcium silicate-based materials were introduced to address these issues, including Biodentine™ (Septodont, Saint-Maur-des-Fosses, France), iRoot BP Plus (Innovative BioCeramix, Vancouver, BC, Canada), and TotalFill® BC RRM™ Putty (FKG Dentaire, Sàrl Le Crêt-du-Locle, Switzerland), among various other brands. These materials have decreased the setting time to an average of 9–12 min, hence eliminating the two-step obturation procedure. Consequently, such materials have been utilized in apexification cases. Regenerative endodontic treatment (RET) is a treatment modality that has been implemented in recent years to address properly selected cases of immature permanent teeth with necrotic pulp. This treatment aims to revitalize the damaged tissues within the canal space and facilitate the maturation of the root, as well as thickening of the dentinal walls through hard tissue deposition. RET is founded on a tissue bioengineering paradigm that incorporates four critical components: stem cells, scaffolds, bioactive growth factors, and disinfection, to achieve successful outcomes.
Although RET is regarded as an alternative treatment option for an infected immature tooth, numerous studies have demonstrated inconsistency in root lengthening, wall thickening, and apical closure. Apexification, by contrast, is a well-established treatment with favorable outcomes and consistent results, as evidenced by several clinical studies and case reports. The primary radiographic outcomes observed are the resolution of apical radiolucency, development of an apical barrier, and apical closure. Histological studies of apexification procedures in human and animal models demonstrated the formation of newly mineralized tissue above the apical foramen, defined as either bone-like tissue, cementum-like tissue, or osteodentin tissue. To our knowledge, there is limited histological evidence supporting the apexification treatment of an endodontically failed tooth. The present case describes the successful clinical and histological observations of an apexification procedure for an endodontically failed tooth with an open apex.
A 24-year-old Caucasian female patient was referred to the Department of Endodontics at the College of Dentistry, King Saud University, Riyadh, Saudi Arabia, for assessment of the right maxillary central incisor. The patient's chief complaint was mild-to-moderate pain during biting and discoloration of her upper front teeth. The patient had a history of trauma to the anterior maxillary region 10 years earlier, after which she underwent root canal treatment at a private clinic. The patient has no history of any systemic disease and, according to the American Society of Anesthesiologists (ASA) classification, is class ASA I. A clinical examination of the right maxillary central incisor (#11) revealed a defective tooth-colored restoration and mild crown discoloration compared to the adjacent teeth ( A). Pulp testing, which involved applying Endo-Frost (Coltène/Whaledent GmbH+ Co. KG, Langenau, Germany) with a cotton pellet and using an electric pulp tester (Analytic Technology, Redmond, WA, USA), revealed no response. Percussion and palpation recorded mild tenderness and pain; the tooth showed no mobility, and periodontal probing depths were within normal limits. The preoperative periapical radiograph revealed an inadequate root canal filling that was short of the apex, accompanied by a defective tooth-colored restoration ( B). The apical region exhibited a short root with a blunderbuss canal and an open apex, along with slight apical radiolucency. Based on the clinical and radiographic findings, the endodontic diagnosis was a previously treated tooth with symptomatic apical periodontitis. After a thorough discussion with the patient, the treatment options presented included an endodontic approach followed by the placement of a post/core and crown, extraction with or without subsequent replacement, or no treatment. Based on the clinical assessment, the tooth had a favorable prognosis; thus, the indicated treatment option involved endodontic treatment followed by the placement of a post-core-crown restoration. The endodontic treatment options and procedures were explained to the patient, including non-surgical root canal retreatment with either regenerative endodontic treatment (RET), conventional calcium hydroxide apexification, or one-step apexification.
Following consultation with the prosthodontist, regenerative endodontic treatment was excluded due to the necessity of a post in the root canal space to support the ceramic crown; thus, one-step apexification was selected. Informed written consent was obtained from the patient to perform a one-step apexification procedure after a discussion regarding the treatment of the tooth. There was no ethical conflict.
2.1. First Treatment Visit
The patient was anesthetized with 2% lidocaine with 1:100,000 epinephrine (Novocol Pharmaceutical, Cambridge, ON, Canada) using an infiltration technique. Tooth number 11 was isolated under a rubber dam, and the access cavity was re-opened. The gutta-percha was removed with H-files, and the working length was established using an electronic apex locator (Root ZX, J Morita MFQ Corp., Kyoto, Japan), measured at 0.5 mm short of the apex with a K-file #100 and confirmed by a radiograph ( A). The canal walls were not enlarged, and irrigation was conducted with 10 mL of 1.5% sodium hypochlorite (NaOCl). A final flush with saline solution was performed, and the canal was dried with sterile paper points. Calcium hydroxide (UltraCal XS, Ultradent Products, Inc., South Jordan, UT, USA) medicament was placed in the root canal, the access cavity was sealed with the temporary restorative material Cavit G (3M Deutschland GmbH, Seefeld, Germany) ( B), and the patient was given a second appointment 3 weeks later. All procedures were conducted under an operating microscope (ZEISS microscopy, Jena, Germany).
2.2. Second Treatment Visit
At the second visit, the patient was asymptomatic. Tooth number 11 was isolated using a rubber dam after the administration of local anesthetic, and access to the canal was accomplished. The root canal was thoroughly irrigated with 10 mL of 1.5% NaOCl followed by a final rinse with 5 mL of saline solution, and then dried with sterile paper points. TotalFill® BC RRM™ Putty (FKG Dentaire, Sàrl Le Crêt-du-Locle, Switzerland) was introduced into the canal and compacted apically using Schilder pluggers (DENTSPLY Caulk, Milford, DE, USA). A periapical radiograph was taken to confirm adequate placement of the apical plug ( C). The remaining part of the canal was backfilled with injectable thermoplasticized gutta-percha. The access cavity was then restored with Ketac™ Molar-Aplicap glass ionomer (3M Deutschland GmbH, Seefeld, Germany) and light-cured composite Filtek™ Z350 XT (3M Deutschland GmbH, Seefeld, Germany). Subsequently, a final periapical radiograph was taken ( D).
2.3. Follow-Up Visit
Clinical evaluation: The patient was recalled 6 months and 2 years postoperatively and was asymptomatic at both follow-up visits.
Radiographical evaluation: The two-year follow-up periapical radiograph showed the formation of a calcific barrier at the root apex with a normal periapical area in comparison to the preoperative periapical radiograph . The objective assessment of the calcified bridge by radiographic imaging is as follows. Calcified bridge dimension: the radiopaque band observed at the root apex demonstrates a sufficient thickness of approximately 2 mm in width and 3.5 mm in length, extending across the entire width of the canal to ensure adequate apical closure. Calcified bridge density: the radiopaque band exhibits uniformity, indicating consistent mineralization. Furthermore, the radiopacity is comparable to that of dentin or cementum and is clearly distinguishable from the surrounding radiolucent areas.
During the subsequent follow-up visits, we were informed that the prosthodontic treatment plan had been revised: the patient preferred not to proceed with the post-core crown restoration and opted instead for an implant for long-term survival. Consequently, the treatment option presented to the patient involved the continuation of endodontic therapy in conjunction with orthodontic extrusion to maintain the bone level before implant placement. The orthodontic extrusion period lasted 6 months, and the elastic was changed once a week. Subsequently, a 3-month stabilization period was allowed for healing. Tooth #11 was then replaced by a dental implant with a length of 10 mm and a width of 5 mm via a conventional protocol, and the prosthetic part was restored with a PFM crown. A post-operative photograph and periapical radiograph of the restored single-tooth implant are shown in .
2.4. Histologic Procedure
Permission for histologic examination of the tooth was obtained from the patient. After extraction ( A), the tooth was immediately placed in a 10% neutral buffered formalin solution for fixation. The tooth was then decalcified in 7% formic acid until complete decalcification. The specimen was subsequently rinsed with running tap water for 2 hours, dehydrated with ascending concentrations of alcohol (70%, 90%, and 100%), and embedded in paraffin. Longitudinal serial sections of 4 µm thickness were then obtained with a microtome in a buccolingual direction, and the specimens were stained with hematoxylin-eosin. Samples were observed under a light microscope to determine the histologic features.
2.5. Histologic Observation
The histologic findings showed the formation of mineralized tissue at the root apex ( C). The primary component of this newly developed apical barrier was a continuous layer of dentin-like tissue located adjacent to the apical plug, characterized by dentinal tubule structures ( D). Incremental layers of cementum-like tissue, most likely acellular cementum, had formed adjacent to the dentin-like tissue ( E). Connective tissue with distinct collagen fibers was observed next to the cementum-like tissue ( F). Connective tissue with calcified areas was also observed next to the dentin-like tissue ( G).
The molecular foundation of the apexification healing process involves various growth factors, cytokines, transcription factors, and bone morphogenetic proteins (BMPs) that facilitate the differentiation of SCAP into dentin-like, cementum-like, bone-like tissues, and/or organic matrix via specific signaling pathways . The SCAP, derived from neural crest mesenchymal stem cells, are a distinct population with significant proliferative capacity, capable of self-renewal and exhibiting minimal immunogenicity . Furthermore, the SCAP are capable of remaining viable in an infected immature permanent tooth with apical periodontitis, hence they are regarded as an essential biological source for the formation of the pulp-dentin complex and the continuing process of root development . Prior histological studies indicated a variable response of apical tissue to the apexification procedure. An animal study conducted by Ham et al. demonstrated periapical healing and the formation of new calcified tissue, recognized as bone-like, cementum-like tissue, or osteodentin, at root apex of the infected, immature teeth . An additional animal study by Palma et al. indicated that the developed apical barrier predominantly included cellular cementum encircled by periodontal ligament in most teeth treated with MTA apexification . Yang et al. showed that the formed calcified barrier composed of immature hard tissue, connective tissue, and bone developed following calcium hydroxide apexification treatment of an immature human premolar tooth . In this study, the histologic evaluation revealed the formation of an apical calcified barrier formed at the root apex, which was primarily composed of dentin-like tissue and cementum-like tissue. The dentin-like tissue located adjacent to the apical plug, distinguished by the presence of dentinal tubule structures. Subsequently, the incremental layers of cementum-like tissue were identified, possibly representing acellular cementum tissue. Furthermore, regions of connective tissue exhibiting distinct collagen fibers were noted, along with connective tissue containing calcified patches. We are unable to correlate our findings with the published data, which exhibit considerable variability in the type of newly formed tissue, likely attributable to differing study standards; some employed animal jaw models while others examined human teeth, alongside variations in treatment provided prior to histological assessment. Additionally, to the best of the author’s knowledge, this is the first histological study of an endodontically failed tooth that underwent successful apexification treatment. The objective assessment of the calcified bridge enables clinicians to ascertain the effectiveness of the formed bridge in sealing the apex and supporting periapical healing. The specific characteristics of the calcified bridge, including size, dimension, and density, can be assessed using radiographic imaging techniques such as periapical radiography or cone beam computed tomography. The radiograph in this investigation indicated a radiopaque structure at the apex of the root canal, consistent with a mineralized barrier. The calcified bridge exhibits adequate dimensions, measuring approximately 2 mm in width and 3.5 mm in length. The density and radiographic characteristics indicate sufficient mineralization and closure of the apical foramen. These findings are consistent with previous studies reporting the formation of calcified barriers during apexification procedures . 
Numerous biological factors that contribute to the failure of endodontic treatment have been identified. Nevertheless, the most prominent cause of failure is the persistence or regrowth of intraradicular infection . The disinfection of root canal system in endodontically failed teeth is of great concern and may provide obstacles when managing an infected immature tooth with thin dentin walls when compared to their matured counterparts . Evidence indicated that the use of Ca(OH) 2 medicament in MTA apexification treatments considerably promoted periodontal tissue repair and regeneration. The majority of reported cases of apexification procedures, including the current report, were conducted through two clinical sessions, during which Ca(OH) 2 was applied as an intracanal medicament . The selection of material for the apical plug has a significant impact in the apexification outcomes. It must exhibit superior biocompatibility, facilitate stem cell migration and differentiation, possess antimicrobial properties, remain insoluble, be user-friendly, and not induce discoloration . In addition to Ca(OH) 2 and MTA materials, contemporary literature supports the use of calcium silicate bioceramic materials for apical barrier formation . Interestingly, long-term prognostic studies demonstrated that apexification had high survival rates, irrespective of the type of bioactive material employed. High survival rates of Ca(OH) 2 apexification have been reported to reach 86%, with an average follow-up duration of five years . A recent long-term survival study of an immature traumatized incisors, indicated a median survival rate of 10 years for Ca(OH) 2 apexification and 16 years with MTA apexification . A retrospective study with an average follow-up duration of 3.3 years revealed that 86.3% of teeth treated with Biodentine ™ as an apical plug exhibited complete healing or shown symptoms of healing . A critical consideration in the treatment of teeth with wide-open apices is the avoidance of periapical extrusion of the apical plug filling material into the periradicular tissue. The excessive filling or extension of the apical filling material has been demonstrated in prior histological investigations to correlate with significant inflammatory cell infiltration and the lack of apical barrier tissue development . This inflammatory process is thought to have impeded the repair of periodontal tissue, hence interfering with the formation of the hard tissue barrier. It has been recommended to employ a matrix at the periapex in wide-open apices to control the compaction of MTA material and prevent its extrusion. A variety of biocompatible materials have been documented in the literature for this purpose, including dentin chips, bovine bone xenografts, calcium phosphate, oxidized cellulose, and platelet-rich fibrin . In the current study, we used a calcium silicate bioceramic material (TotalFill ® BC RRM™ Putty) as an apical plug, which is a pre-mixed condensable putty that allows for controlled administration without the necessity of an apical matrix. An interdisciplinary approach, along with accurate diagnostics, is essential for achieving improved, conservative, and predictable outcomes in aesthetic areas. The endodontist performs a crucial role in advising patients regarding the decision-making process between tooth preservation and extraction. This encompasses a discussion of the advantages, risks, and long-term consequences related to each of the options . 
In regard to the present case, endodontic therapy, succeeded by post-core-crown restoration, was identified as the preferred treatment modality. Nonetheless, in accordance with the patient’s preferences, the treatment plan was amended to accommodate extraction followed by implant replacement. Orthodontic extrusion is being implemented as a treatment modality that enhance both hard and soft tissue aspects prior to the implantation of dental implants . The patient was satisfied with the color, morphology, and margins of the cemented restoration. The present case demonstrates the clinical and radiographical success of an endodontically failed permanent incisor with an open apex after an apexification procedure. A two-year follow-up visit revealed the absence of signs and symptoms and hard tissue formation at the root apex. The histological evaluation of the newly formed mineralized tissue at the root apex revealed the formation of a continuous layer of dentin-like tissue with an identifiable dentinal tubule structure and the formation of an incremental layers of cementum-like tissue. In addition, connective tissue with distinct collagen fibers and connective tissue with calcified areas were noted. |
Proteins Involved in Endothelial Function and Inflammation Are Implicated in Cerebral Small Vessel Disease | 1af12eed-001f-453c-a59f-9c06a7c87a0b | 7617319 | Biochemistry[mh] | The study is reported following the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines for observational and Mendelian randomization studies (Supplemental Material). Data Availability Data from the UK Biobank are available to researchers through application at http://www.ukbiobank.ac.uk/using-the-resource/. The GWAS summary statistics are available on the GWAS Catalogue with accession numbers detailed in the Supplemental Methods. Study Populations of the Proteomic Data The UKB (UK Biobank) recruited over 500 000 participants from 2006 to 2010. Their blood was sampled for genotype and biomarker assessments. From 2020 to 2021, the UKB-PPP (UKB Pharma Proteomics Project) randomly retrieved 46 595 blood samples collected at baseline to undergo proteomic assessment covering 2922 proteins using the Olink Explore 3072 Assay. For the proteome-wide GWAS, 34 557 participants with European ancestry formed the discovery cohort, while the remaining participants became the replication cohort. The detailed procedures of proteomic profiling and data processing have been reported by Sun et al. The Iceland 36K is a population-based study involving 35 892 Icelanders recruited from 2000 to 2019. The participants' plasma samples were measured with the SomaScan version 4 Assay, capturing 4670 proteins. The detailed study protocol has been published by Eldjarn et al. Mendelian Randomization A hypothesis-driven approach was used to select candidate proteins to be screened using MR. Literature reviews were performed to identify proteins involved in endothelial dysfunction, inflammation, blood-brain barrier breakdown, oxidative stress, the neuro-glia-vascular unit, and vascular remodeling in the context of SVD, vascular cognitive impairment, dementia-causing diseases, and cardiovascular diseases (Supplemental Methods; Table S1). Their genetic data availability was checked among the 5758 nonoverlapping assays analyzed in the UKB-PPP and Iceland 36K studies, using a keyword-based search strategy adapted from Lindbohm et al. In addition to the proteins selected from the literature review, we included all 736 protein assays from the Olink Inflammation Panels I and II. After removing duplicates, we filtered the selected proteins based on the quality and feasibility of their genetic data with criteria defined a priori (Figure S1). In total, 996 protein assays were prioritized for MR. GWAS summary statistics for the proteins were extracted from the UKB-PPP discovery cohort or the Iceland 36K study normalized set. For overlapping assays, those from the UKB-PPP were utilized to maintain consistency between the MR analysis and the regression analyses performed within the UKB cohort. The summary statistics for WMH (n=55 291), MD (n=36 460), and FA (n=36 533) were obtained from Koohi et al. The summary statistics for CMB (n=3556 cases of any brain microbleeds, 22 306 controls) were obtained from Knol et al. The summary statistics for EPVS in white matter (n=9324 cases, 29 274 controls) were obtained from Duperron et al. The summary statistics for LS (n=6030 cases, 248 929 controls) diagnosed with TOAST (Trial of ORG 10172 in Acute Stroke Treatment) criteria or MRI evidence were obtained from Traylor et al.
In addition, because MRI phenotyping of LS is more accurate, we performed a secondary analysis using a GWAS on 3199 exclusively MRI-confirmed LS cases, comprising 2612 cases from Traylor et al and 587 additional cases. GWAS summary statistics for individuals of European ancestry were used for all proteins and outcomes except for CMB, which included 3% of participants from other ancestries. Further details of the outcome GWAS are provided in the Supplemental Methods and Table S2. Uncorrelated single nucleotide polymorphisms (SNPs; r²<0.01) in cis association with the proteins (±1 Mb of the gene-coding region) and below the genome-wide significance level (P<5×10⁻⁸) were eligible as MR instruments. Nineteen of the total 5976 pairs (996 proteins×6 outcomes) could not be tested because neither the instrument nor its proxy existed in the outcome GWAS (Supplemental Methods). Overall, 5957 pairs were tested using the TwoSampleMR (version 0.6.8) and MendelianRandomization (version 0.9.0) R packages. The fixed-effect inverse-variance weighted method was used when at least 2 instrument SNPs were available. A Wald ratio was calculated when only 1 instrument SNP was present. F statistics were calculated to quantify the strength of the instruments. MR pleiotropy residual sum and outlier tests were performed to identify possible horizontal pleiotropy. A false discovery rate (FDR) threshold of 5% was used to control for multiple testing across the 6 outcomes. Four sensitivity analyses were performed among the proteins identified with causal evidence from the primary MR (Supplemental Methods). First, the instruments were changed from those selected based on linkage disequilibrium clumping to independent cis pQTLs derived from the conditional analyses by Sun et al or Eldjarn et al. Second, to mitigate the possibility of confounding by linkage disequilibrium, additional MR tests were performed to assess the associations of the candidates' neighboring proteins with SVD. Third, an external replication analysis was conducted using the overlapping assays on the SomaScan platform with the same MR approach. Fourth, multivariable MR was performed to estimate the direct effect of each protein conditional on systolic blood pressure, a major risk factor for SVD. Colocalization Pairwise colocalization was performed to identify shared genetic variants coregulating the plasma level of each candidate protein and the 6 outcomes using the coloc (version 5.2.3) R package. To minimize false positives, each genetic region was narrowed down to a ±200 kb window surrounding the protein-coding gene. A sensitivity analysis was performed using the ±1 Mb cis window. The default priors (p1=1×10⁻⁴, p2=1×10⁻⁴, p12=1×10⁻⁵) were used. A posterior probability (PP) threshold of H4>0.8 was defined. For the pairs with a high H3 PP, conditional colocalization was conducted using the coloc.susie R function in case true colocalizing signals were masked by the presence of multiple association signals in the region. In addition, we examined whether gene expression of the candidate proteins had been identified in brain cells or peripheral blood mononuclear cells (PBMCs) in reference to the findings of Bryois et al and Yazar et al (Supplemental Methods). For the proteins with gene expression found in these cells, we further queried whether their cis single-cell expression QTLs (sc-eQTLs) had been identified. If so, we matched their cis sc-eQTLs to the same loci in the GWAS summary statistics of the proteins and SVD traits. We then reviewed and reported the associations between these genetic loci and the plasma protein levels or SVD traits.
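The MR estimators used above (a fixed-effect inverse-variance weighted estimate when at least 2 instrument SNPs were available, a Wald ratio for a single SNP, and per-SNP F statistics for instrument strength) reduce to closed-form expressions over GWAS summary statistics. A minimal sketch follows; it is illustrative only, written in Python rather than the TwoSampleMR/MendelianRandomization R packages actually used, and all inputs are hypothetical:

```python
import numpy as np

def wald_ratio(beta_exp, beta_out, se_out):
    """Causal estimate from a single instrument SNP (first-order SE)."""
    return beta_out / beta_exp, se_out / abs(beta_exp)

def ivw_fixed(beta_exp, beta_out, se_out):
    """Fixed-effect inverse-variance weighted estimate over >=2 SNPs."""
    beta_exp, beta_out, se_out = map(np.asarray, (beta_exp, beta_out, se_out))
    w = beta_exp**2 / se_out**2           # inverse variance of each Wald ratio
    estimate = np.sum(beta_exp * beta_out / se_out**2) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    return estimate, se

def f_statistic(beta_exp, se_exp):
    """Per-SNP instrument strength; F > 10 is the usual 'strong' threshold."""
    return (np.asarray(beta_exp) / np.asarray(se_exp))**2
```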
Associations With Cognition, Dementia, and Stroke Epidemiological analyses were performed in the UKB-PPP cohort to investigate whether circulating protein abundance was associated with baseline cognitive performance or future risk of all-cause dementia or any stroke during the prospective follow-up. Prevalent dementia or stroke cases at baseline were excluded. The baseline values of normalized protein expression were obtained for the 13 candidate proteins identified as causal from the primary MR. They were further inverse-rank normalized to minimize outlier effects and ensure comparability across proteins. Individuals with missing normalized protein expression values were excluded from the analysis on a protein-by-protein basis. Two sets of secondary analyses were performed, with covariate characterization and modeling detailed in the Supplemental Methods. First, for both the cross-sectional and survival analyses, we additionally adjusted for Townsend deprivation index (continuous), body mass index (kg/m²), smoking (current/not current), alcohol drinking (current/not current), systolic blood pressure (mm Hg), total cholesterol (mmol/L), LDL (low-density lipoprotein) cholesterol (mmol/L), and baseline diabetes (yes/no). Education (years) was also adjusted for when the outcomes were cognitive tests. Second, we performed mediation analyses estimating (1) the direct effect of each protein on the risk of dementia or stroke and (2) the proportion of mediation by systolic blood pressure, adjusting for the same set of covariates as in the primary analyses. Overall, we used data from the baseline assessment for all covariates where possible. If an individual was missing data from the baseline visit for a particular covariate, we used data from the earliest available repeat assessment for that individual if available. Participants with missing values for any of the covariates and any of the 4 outcomes were excluded (n=4505, ≈8.7% of the UKB-PPP cohort). Multiple testing was corrected with an FDR threshold of 5%, accounting for the 13 candidate proteins identified from the MR analysis with each outcome. Cross-Sectional Analysis on Cognition At baseline, all participants were invited to complete reaction time and pairs matching tests as measures of processing speed and visuospatial memory, respectively (Supplemental Methods). These 2 test scores were used as outcomes for the cross-sectional analyses, with higher scores indicating longer reaction time or more matching errors. Both cognitive test scores were highly skewed. Therefore, reaction time was natural log-transformed before being fitted in the multivariable linear regression, and the number of pair-matching errors was modeled using negative binomial regression because this value was discrete and zero-inflated. Based on the coefficient estimates, we calculated the % change in reaction time or matching errors per 1-unit increase in each protein's normalized value. The primary models were adjusted for age and sex. Survival Analysis on Dementia and Stroke All-cause dementia and all-cause stroke were used as the outcomes (Supplemental Methods), which were ascertained by the UKB algorithm based on linked healthcare records and death certificates. Fine-Gray models were applied to estimate the subdistribution hazard of dementia or stroke per 1-unit increase in each protein's normalized value.
For dementia, death from all other causes, including acute stroke, was accounted for as a competing risk, and vice versa for stroke. As a sensitivity analysis, we also estimated cause-specific hazard ratios using Cox proportional hazards models by treating death from other causes as a censoring mechanism. The proportional hazards assumption was examined by plotting the Schoenfeld residuals for each protein. To control for confounding, we adjusted for age and sex (male/female) in the primary models for stroke, and we controlled for age, sex, education (years), and APOE (apolipoprotein E) ε4 carrier status (yes/no) in the primary models for dementia. Age was modeled as linear and quadratic terms. Curation Using DrugBank Databases GREP (Genome for Repositioning Drugs) software was used to quantify the enrichment of protein candidates among the drug targets of clinical indication classes, including the International Classification of Diseases, Tenth Revision (ICD-10) and the Anatomical Therapeutic Chemical classification. The drug targets queried in GREP cover the approved or investigated drugs in the DrugBank and Therapeutic Target databases. We additionally searched each protein candidate in DrugBank to obtain drug information on the developmental status and the mechanisms of action.
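As a rough illustration of the survival modeling above: the cause-specific Cox sensitivity analysis (treating deaths from competing causes as censoring) can be sketched in Python with the lifelines package, whereas the primary Fine-Gray subdistribution models are typically fit with dedicated competing-risks software such as R's cmprsk. The file and column names below are hypothetical:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis table: one row per participant, with follow-up time,
# an event-type label, and inverse-rank-normalized protein levels.
df = pd.read_csv("ukb_ppp_survival.csv")
df["age_sq"] = df["age"] ** 2                       # age as linear + quadratic
df["dementia_event"] = (df["event_type"] == "dementia").astype(int)

cols = ["follow_up_years", "dementia_event", "EPHA2",
        "age", "age_sq", "sex", "education_years", "apoe_e4"]
train = df[cols]

cph = CoxPHFitter()
cph.fit(train, duration_col="follow_up_years", event_col="dementia_event")
cph.print_summary()           # cause-specific HR per 1-unit increase in EPHA2
cph.check_assumptions(train)  # Schoenfeld-residual proportional-hazards check
```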
Identification of Endothelial and Inflammatory Proteins Associated With SVD For each of the 996 proteins involved in endothelial function and inflammation, MR was used to evaluate its association with LS and 5 neuroimaging markers (Figure ). The primary analysis was performed for 5957 protein-outcome pairs (Supplemental Material 1). Seventeen pairs (0.285%) were significant after multiple testing correction at 5% FDR, corresponding to a P threshold of 1.4×10⁻⁴, 2-sided. All these pairs had strong instruments (F statistics > 10; Supplemental Material 2). The MR pleiotropy residual sum and outlier test did not identify substantial horizontal pleiotropy for any of these pairs (Supplemental Material 3). Of the 17 pairs covering 13 unique proteins, 9 proteins were associated with 1 imaging feature of SVD; 3 proteins (APOE, PEAR1 [platelet endothelial aggregation receptor 1], and HEXIM1 [hexamethylene bis-acetamide-inducible protein 1]) were associated with ≥2 neuroimaging markers; and 1 protein (COL2A1 [collagen type II α-1 chain]) was associated with LS (Figure A). Coherent results were detected across the features of high WMH volume, high MD, and low FA, all 3 suggesting white matter pathology. When the LS cases were restricted to those confirmed by MRI, no significant result was found after multiple testing correction at 5% FDR (Supplemental Material 4). The association between COL2A1 and MRI-confirmed LS was not significant, although its effect was consistent with the primary result (odds ratio for MRI-confirmed LS, 0.94 [95% CI, 0.86–1.03]; P=0.18 versus odds ratio for LS in the primary analysis, 0.89 [95% CI, 0.86–0.91]; P=5×10⁻⁵). Sensitivity analyses were conducted among the 13 candidate proteins identified from the MR. First, the instruments were changed to the conditionally independent cis pQTLs reported by Sun et al. All pairs remained significant (P<0.05, 2-sided) except for the CD46 (membrane cofactor protein)-lower FA pair (Figure S2). Second, the protein-coding genes situated within ±200 kb of the 13 candidate genes were identified with LocusZoom plots (Supplemental Methods). Sensitivity analysis indicated that HAVCR2 (hepatitis A virus cellular receptor 2) was associated with WMH and EPVS, and CR1 (complement receptor type 1) was associated with MD and lower FA, resembling the associations identified for TIMD4 (T-cell immunoglobulin and mucin domain-containing protein 4) and CD46 in the primary results, respectively (Figure S3). Third, external replication was performed for 10 of the 13 candidate proteins overlapping between the Olink and SomaScan platforms (Supplemental Material 5). Nine of these showed consistent results, while the association for 1 protein, NPTX1 (neuronal pentraxin-1), attenuated, although its effect direction remained consistent (Figure S4). Fourth, in the multivariable MR, 10 of the 13 proteins showed significant effects consistent with the primary MR (Supplemental Material 6).
The associations for 2 proteins (PDE5A [cGMP-specific 3',5'-cyclic phosphodiesterase] and CD46) were no longer significant, but their effect directions remained consistent (Figure S5). The association of HEXIM1 with white matter also attenuated; however, it showed a positive association with CMB. Shared Genetic Associations Between Candidate Proteins and SVD Among the 13 candidate proteins identified from the MR results, 4 proteins (METAP1D [methionine aminopeptidase 1D, mitochondrial], EPHA2 [ephrin type-A receptor 2], APOE, and PEAR1) were identified with genetic variants that coregulate their plasma abundance and SVD traits (colocalization PP.H4>0.8; Figure B; Supplemental Material 7). Of the 17 pairs found significant by the MR, 6 pairs (covering the 4 proteins) were colocalized, and 1 pair, MERTK (tyrosine-protein kinase Mer)-WMH, showed a moderate probability of hypothesis 4 (PP.H4=0.65). Five pairs (involving TIMD4, PDE5A, FLT4 [vascular endothelial growth factor receptor 3], NPTX1, and COL2A1) were assigned a high PP for hypothesis 1, possibly due to a lack of power (Figure S6). Five other pairs (HEXIM1-WMH, HEXIM1-MD, HEXIM1-lower FA, MEGF10 [multiple epidermal growth factor-like domains protein 10]-MD, and CD46-lower FA) showed a high PP for hypothesis 3. For these 5 pairs, conditional colocalization was conducted in the Sum of Single Effects regression framework in case true colocalizing signals were masked by the presence of multiple association signals in the region. All 5 pairs were identified with conditional signals (Supplemental Material 8). The sensitivity analysis using the ±1 Mb window showed consistent results (Supplemental Material 9). Taken together, 7 of the 13 proteins were colocalized with ≥1 SVD trait (Figure S6). Referencing the single-cell sequencing studies, we found that gene expression was detected for 9 proteins in brain cell types and 7 proteins in PBMCs (Tables S3 and S4). Among them, cis sc-eQTLs had been identified for PDE5A and CD46 in excitatory neurons, inhibitory neurons, and oligodendrocytes. CD46 also had sc-eQTLs in oligodendrocyte precursor cells. Four proteins (TIMD4, FLT4, HEXIM1, and METAP1D) had cis sc-eQTLs identified in CD4 naive/central memory T cells. The sc-eQTLs for TIMD4, FLT4, HEXIM1, CD46, and PDE5A were strongly associated with their plasma levels; P values for the SNP-protein associations ranged from 1×10⁻⁵ to 1×10⁻⁵⁸ based on the protein GWAS conducted in the UKB-PPP study (Table S5). However, only one sc-eQTL (rs4632173, for the expression of HEXIM1 in CD4 naive/central memory T cells) showed a strong association with SVD traits (SNP-WMH association: P=1×10⁻⁸; SNP-MD association: P=1×10⁻⁴; Table S5). Circulating Protein Levels in Association With Cognition, Dementia, and Stroke Baseline characteristics of the study population are presented in Table S6. Briefly, among the 47 571 participants included in our analysis, 1228 (2.6%) had developed all-cause dementia and 1268 (2.7%) had experienced any stroke at the time of our data extraction in November 2023. The average time to dementia and stroke was 9.2 and 8.2 years, respectively. The average follow-up among all participants was 14.0 years. In cross-sectional analyses adjusted for age and sex, increasing plasma abundance of 6 proteins (METAP1D, EPHA2, TIMD4, FLT4, NPTX1, and HEXIM1) was associated with prolonged reaction time (FDR-corrected P<0.05; Figure A).
Conversely, a 1-unit increase in COL2A1 abundance was associated with shorter reaction time (% change, −0.59% [95% CI, −0.78% to −0.41%]) and fewer matching errors (% change, −1.47% [95% CI, −2.21% to −0.73%]; Figure A and B). No other significant associations were observed for the pairs matching test (Figure B). Of the 7 proteins that showed a significant effect on either cognitive test, 5 had effects consistent with those identified in the MR analysis (COL2A1, METAP1D, EPHA2, TIMD4, and FLT4). Adjusting for demographic and vascular risk factors in the secondary models did not meaningfully change the results from the primary models (Supplemental Material 10). In the survival analyses with the Fine-Gray models using an FDR-corrected P<0.05 as the threshold, 4 of the 13 protein candidates were associated with all-cause dementia (EPHA2, APOE, PDE5A, and MERTK) after adjusting for age, sex, education, and APOE ε4 carrier status, and 5 proteins (METAP1D, EPHA2, TIMD4, MERTK, and CD46) were associated with any stroke after conditioning on age and sex (Figure A and B). EPHA2 was significantly associated with both dementia and stroke, and the effect directions were consistent with the MR findings. Although significant results were also identified for MERTK with both outcomes and for CD46 with stroke, their hazard ratios suggested effects opposite to those found in the MR analysis. The other 4 proteins (APOE, PDE5A, METAP1D, and TIMD4) were associated with either dementia or stroke, with effects consistent with the MR analysis. Consistent results were observed across the primary Fine-Gray models, the secondary models adjusting for demographic and vascular risk factors (Supplemental Material 11), and the cause-specific Cox models (Supplemental Material 12). Visual examination of the Schoenfeld residuals for each protein did not identify significant violation of the proportional hazards assumption. In the mediation analyses, the significant protein-outcome pairs identified from the survival analyses all remained significant (FDR<0.05) in terms of their direct effects except METAP1D (Supplemental Material 13). For any protein-dementia pair, the proportion of mediation by systolic blood pressure was not significant; for all protein-stroke pairs, the proportions of mediation were only moderate, ranging from −23% to 30%, despite their significance (Supplemental Material 13). Druggability of Candidate Proteins Of the 13 proteins, 6 (COL2A1, EPHA2, FLT4, MERTK, PDE5A, and APOE) have been investigated and tested as drug targets (Table). We conducted an analysis using GREP software, which indicated enrichments in antineoplastic and immunomodulating agents targeting EPHA2, FLT4, and MERTK (Supplemental Material 14 and 15). Moreover, PDE5A was enriched as a therapeutic target for diseases affecting the cardiovascular, respiratory, and genitourinary systems. In the DrugBank database, collagenase clostridium histolyticum, a drug that was under investigation, was found to regulate COL2A1 levels. Supplements that affect zinc availability were also shown to modulate APOE levels via their interactions; however, their effects may not be specific. The medications targeting these proteins, their mechanisms of action, and their developmental status are summarized in the Table. The results for the 13 proteins are summarized in Figure A in descending order of confidence.
A Venn diagram illustrates the 13 proteins according to their roles in different pathways underlying endothelial function and inflammation (Figure B).
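For context on how the effect sizes above are derived: with a log-transformed outcome (reaction time) or a log-link negative binomial model (matching errors), a regression coefficient β converts to a percent change of 100×(exp(β)−1) per 1-unit increase in the normalized protein value, and multiple testing across the 13 candidates is controlled with Benjamini-Hochberg FDR. A minimal sketch with hypothetical inputs:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

def pct_change(beta):
    """% change in outcome per 1-unit increase in normalized protein level."""
    return 100.0 * (np.exp(beta) - 1.0)

# A coefficient of -0.0059 on log reaction time is roughly -0.59% per unit.
print(round(pct_change(-0.0059), 2))

# Hypothetical p-values for the 13 candidate proteins against one outcome:
pvals = np.random.uniform(size=13)
reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
```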
Within a hypothesis-driven framework, a total of 996 proteins related to endothelial dysfunction and inflammation were assessed for their associations with SVD using MR. MR evidence supported 1 protein (COL2A1) associated with LS and 12 additional proteins (EPHA2, APOE, PEAR1, FLT4, TIMD4, PDE5A, MEGF10, MERTK, NPTX1, HEXIM1, METAP1D, and CD46) associated with ≥1 neuroimaging feature of SVD.
Colocalization analysis suggested that 7 of the 13 proteins (EPHA2, APOE, PEAR1, MEGF10, HEXIM1, METAP1D, and CD46) shared causal genetic variants with SVD. Using cross-sectional and survival analyses in the UKB-PPP cohort, 7 proteins (COL2A1, EPHA2, APOE, FLT4, PDE5A, TIMD4, and METAP1D) were found to be associated with information processing speed, visuospatial memory, incident all-cause dementia, or incident any stroke, with their effect directions consistent with the MR findings. We found the most consistent evidence for EPHA2 and APOE, supported by both MR and conventional epidemiological analyses (Figure A). Eph receptor and ephrin signaling is involved in proinflammatory gene expression. In a mouse model of focal stroke, EPHA2 deletion was shown to reduce MMP-9 (matrix metalloproteinase-9) expression and leukocyte infiltration while increasing expression of a tight junction protein, zona occludens-1. In an endothelial cell line of human brain microvasculature, phosphorylation of the EPHA2 receptor was shown to disrupt tight junctions. Both studies are consistent with our findings, in which increasing plasma EPHA2 abundance was associated with white matter damage, prolonged reaction time, and increased risks of dementia and stroke. APOE has been studied for decades. Consistent with the prior study, we found that reduced plasma APOE levels were associated with white matter damage, CMBs, and increased risk of all-cause dementia independent of APOE genotypes (Figures A and A). Recent research has found that APOE can inhibit the classical complement cascade by binding to C1q (complement component 1q), pointing to another probable inflammation-mediated pathway to SVD. We considered PEAR1 another causal candidate. It had consistent evidence across the MR sensitivity and colocalization analyses, although its associations with cognition, dementia, and stroke were not statistically significant. PEAR1, also known as JEDI (Jagged and Delta protein) or MEGF12 (multiple epidermal growth factor-like domains protein 12), mediates the phagocytosis of apoptotic neurons. Moreover, PEAR1 has been identified as a high-affinity receptor for the SVEP1 (Sushi, von Willebrand factor type-A, EGF, and pentraxin domain-containing 1) protein. Prior human studies have linked SVEP1 to inflammation in atherosclerotic plaques, WMH, and dementia. It will be of interest to investigate whether the interaction between PEAR1 and SVEP1 plays a role in SVD. We identified FLT4, TIMD4, and COL2A1 as likely candidates. Consistent with the MR analyses, these proteins also showed significant associations with cognitive performance, dementia, or stroke (Figure A). Specifically, FLT4 (ie, VEGFR3) is the receptor for vascular endothelial growth factors C and D. In human carotid artery specimens, FLT4 was found to be expressed in monocytes or macrophages in atherosclerotic lesions, where it could regulate immune cell apoptosis and plaque stability. Another protein, TIMD4 (ie, Tim-4), functions to mediate efferocytosis and cytokine production together with its genetic neighbors, HAVCR1 (hepatitis A virus cellular receptor 1; ie, Tim-1) and HAVCR2 (ie, Tim-3). Our MR sensitivity analysis also showed that both TIMD4 and HAVCR2 might be associated with WMH. Lastly, collagen type II α-1 chain (COL2A1) is an extracellular matrix protein whose protein family members, COL4A1 and COL4A2 (collagen type-IV α-1 and α-2 chains), have been shown to play essential roles in SVD pathogenesis.
We found less consistent evidence for the proteins PDE5A, MEGF10, MERTK, and NPTX1. However, each of them was supported by at least 3 of the methods and is involved in relevant biological processes. PDE (phosphodiesterase) has been shown to regulate the activation of platelets and their interaction with inflamed endothelial cells. MEGF10 mediates efferocytosis as a receptor for C1q, which signals apoptosis. Interestingly, another candidate, MERTK, can collaborate with both TIMD4 and MEGF10 on efferocytosis. NPTX1, together with its family members NPTX2 and NPTX receptor, has also been implicated in complement regulation and cognitive impairment. Despite being identified from the primary MR, HEXIM1, METAP1D, and CD46 remain subject to further validation. The gene HEXIM1 is located downstream of PLCD3 (phospholipase C-delta-3), which has been mapped to a GWAS signal for blood pressure. Further research could examine whether it is HEXIM1-regulated inflammation or PLCD3-linked hypertension that is causal for SVD, or whether they correspond to different SVD mechanisms. METAP1D is upstream of homocysteine metabolism, and homocysteine is a marker of vascular inflammation. Although METAP1D showed consistent results across the primary MR, colocalization, and epidemiological analyses, it had only 1 instrument SNP, so it could not be examined in some MR sensitivity analyses. CD46 showed opposite effects between the MR and the regression analyses, whereas its sensitivity analyses indicated null results. However, its genetic neighbor, CR1, showed associations with MD and FA, although they did not pass the FDR threshold in the primary MR. Intriguingly, CD46 and CR1 belong to the same complement pathway, and CR1 has been associated with Alzheimer disease risk. However, whether this pathway is causal for SVD needs further validation. Of the 13 candidate proteins, 7 (APOE, PEAR1, TIMD4, MEGF10, NPTX1, MERTK, and CD46) are potentially involved in complement regulation and efferocytosis, as well as the downstream regulation of inflammation (Figure B). This body of evidence suggests that inhibition of excessive complement activation may be an important pathway to target. Two proteins, PDE5A and PEAR1, are involved in the regulation of platelet activation, suggesting that antiplatelet therapies may be beneficial. Six of the 13 proteins have been targeted in pharmaceutical products (Table). Drugs that inhibit COL2A1, MERTK, and PDE5 have been developed; however, the optimal targeted levels of these proteins need to be ascertained. The effects of these proteins may also depend on whether exposure is lifelong (as proxied by MR) or confined to critical windows. EPHA2 and FLT4 have been investigated as promising targets for cancer treatment. Small-molecule TKIs (tyrosine kinase inhibitors) for EPHA2 and FLT4, such as dasatinib and regorafenib, have been applied in the clinical setting with good efficacy. However, this class of drugs has been associated with adverse events. For all the candidate proteins, we must go beyond their plasma levels and understand their tissue-specific mechanisms. Targeted modulation, rather than simple inhibition, may help prevent unintended effects and tailor these candidates to the management of SVD. Our study has several limitations. First, instead of a proteome-wide screen, we used a hypothesis-driven approach to filter the proteins related to endothelial function and inflammation.
Although this strategy may introduce bias in favor of well-studied proteins, our selection methods and criteria were formulated a priori to ensure objectivity. The number of proteins included in our screen also increased substantially compared with previous studies in SVD. Second, to ensure the proteogenomic data were well quality-controlled and replicable, we filtered the proteins based on their coefficient of variation, percentage below the lower limit of detection, and replicated cis pQTLs using thresholds defined a priori. Although these exclusion criteria helped to ensure robust results, we may have missed some interesting candidates, particularly proteins with trans pQTLs or rare variants. Third, due to a lack of ancestry-specific data from the prior GWAS, our genetic analyses were performed using samples of European ancestry. The generalizability of our MR findings needs to be examined in other racial and ethnic groups. Fourth, due to limited power and a lack of full access to the summary statistics of the single-cell studies, we were unable to perform formal colocalization tests among the eQTLs, pQTLs, and GWAS signals across brain or immune cell types. However, based on a direct mapping of the sc-eQTLs to the SNPs assessed in the protein and outcome GWAS, we were able to observe concordant signals coregulating a gene's cell type–specific expression and its plasma protein abundance. This finding further suggests that circulating protein levels were correlated with those in the disease-related cell types. Our findings suggest roles for endothelial-platelet function and complement-mediated regulation of inflammation in SVD. Future research is necessary to elucidate the pathogenic pathways influenced by these proteins and evaluate the therapeutic potential of each candidate for SVD treatment. Acknowledgments This study made use of the UK Biobank resource under application number 36509. Sources of Funding This research was supported by a joint grant from the British Heart Foundation (ref: SP/F/22/150028) and the Dutch Heart Foundation (project 02-001-2021-B021) to Drs Markus, Mallat, de Leeuw, and Riksen. Infrastructural support was provided by the Cambridge British Heart Foundation Center of Research Excellence (RE/18/1/34212) and the Cambridge University Hospitals National Institute for Health and Care Research (NIHR) Biomedical Research Center (NIHR203312). Dr Harshfield was supported by the Alzheimer's Society (AS-RF-21-017). Dr Riksen was supported by a CardioVasculair Onderzoek Nederland (CVON) grant from the Dutch Cardiovascular Alliance (DCVA) and Dutch Heart Foundation (CVON2018-27; IN-CONTROL II [Inflammatory Reprogramming by Ageing and Microbiome – Targets for Treatment of Cardiovascular Disease]). Disclosures None. Supplemental Material Supplemental Methods Tables S1–S6 Figures S1–S6 Supplemental Material Sheets 1–15
|
PGxQA: A Resource for Evaluating LLM Performance for Pharmacogenomic QA Tasks | eccc55a1-2c44-4a78-aae2-274355f651f5 | 11734741 | Pharmacology[mh] | Introduction 1.1. Pharmacogenetics Pharmacogenetics (PGx) is the study of the role of genetics in an individual’s response to medication, with the aim of bringing tools to the clinic that can utilize a patient’s genetic information to improve medication safety and efficacy. Genetic variations that lead to changes in the activity or availability of drug metabolizing enzymes (DMEs), receptors, channels, and other proteins involved in pharmacodynamics and pharmacokinetics can contribute strongly to interindividual variability in drug response, resulting in an increased risk of adverse drug reactions (ADRs) and nonresponse phenotypes. By identifying genetic markers that influence drug response, PGx enables healthcare providers to predict which patients are more likely to experience adverse reactions or treatment failure. This knowledge allows for more individually tailored medication regimens, optimizing therapeutic outcomes while minimizing the risk of side effects. The overarching goal of PGx is to promote personalized medicine, such that patients receive the right drug, at the right dose, at the right time. In doing so, the field aims to improve patient outcomes, enhance medication safety, and reduce healthcare costs associated with ineffective or harmful treatments. Despite the availability of numerous well-characterized, clinically actionable PGx guidelines for widely used medications, the clinical implementation of PGx has been slow, and very few medical centers and clinics routinely use this technology. This gap is due to various factors, such as a lack of awareness and education among healthcare providers, the constantly evolving body of PGx guidelines, and technical challenges in integrating PGx data into electronic health records (EHRs). The cost of PGx testing and variable insurance coverage can also pose significant financial barriers, while regulatory and legal concerns may further limit the extent of implementation of PGx testing in hospital systems. In particular, a lack of domain expertise and education among healthcare providers, patients, and researchers poses a critical barrier to the implementation of PGx-guided therapies in clinical settings, as it leads to difficulty understanding and interpreting test results and limits research on the clinical impact of such technologies. 1.2. Existing PGx Resources and Limitations Given that there are many causes of interindividual variability in treatment response, as well as a need for guidance in interpreting PGx screening results, multiple independent bodies of experts have published research and guidelines to inform PGx-guided treatment. The Clinical Pharmacogenetics Implementation Consortium (CPIC) is one such group that has generated a set of specific drug recommendations to guide prescribing practices in the presence of genetic test results. CPIC has established 43 evidence-based clinical guidelines for 151 commonly prescribed medications. These recommendations were created based on a large body of evidence showing the impact of known PGx alleles on drug metabolism or response. Level A refers to gene-drug pairs where genetic information “should be used” for prescribing decisions and alternative therapies or dosing are highly likely to be effective and safe. At least one moderate or strong action (change in prescribing) is recommended for Level A pairs.
Level B refers to pairs where genetic information “could be used” to change prescribing because alternative therapies/dosing are extremely likely to be as effective and as safe as non-genetically based dosing. Other international committees with their own sets of guidelines include The Dutch Pharmacogenetics Working Group (DPWG), and the French National Network (Reseau) of Pharmacogenetics (RNPGx). The Pharmacogenomics Knowledge Base (PharmGKB), is a resource that aims to comprehensively aggregate, curate, and characterize PGx knowledge including the literature and guidelines from these distinct sources. While these resources are highly comprehensive, most require a moderate to high degree of domain knowledge to understand and interpret the provided information. Clinicians and patients, in particular, need PGx expertise to understand reports and utilize them to inform treatment decisions. Clinicians typically receive limited PGx training and therefore rely heavily on these resources for guidance. , - Moreover, differences among guidance sources and the rapid pace of new discoveries and guidelines create potential for misunderstandings and confusion. While PharmGKB curates, aggregates, and presents guidance across sources, clinicians, patients, and researchers may prefer an interface that allows them to query and access targeted information using natural language instead of menus and tables. 1.3. Opportunities for Large Language Models to Guide PGx Large language models (LLMs) represent a major advance in artificial intelligence, allowing for the creation of seemingly intelligent chatbots which can interpret questions and assist with various tasks. LLMs have shown promise in a variety of natural language tasks, including those in medicine. For example, chatbots using LLMs can accurately answer patient queries in a conversational manner preferred by patients. GPT-4 has also achieved human-level accuracy on the United States Medical Licensing Exam (USMLE), outperforming the minimum passing threshold on short answer and multiple-choice questions. LLMs have been proposed for integration into clinical workflows to handle administrative tasks, which include managing appointment scheduling by patient request, answering routine inquiries about medication or treatment plans, and assisting in the preparation of medical records. , Additionally, LLMs can support clinical decision-making by providing realtime information retrieval and analysis, potentially reducing the cognitive load on healthcare professionals and improving patient outcomes. For these reasons, advances in LLMs have created an exciting opportunity to build chatbots to assist with complex medical specialties like PGx, providing a powerful and intuitive interface to access pharmacogenetic knowledge. Despite the promise of LLMs in medicine, there are significant issues that must be addressed before widespread clinical integration. These models are limited to the information they were trained on and can produce fabricated responses with an authoritative and confident tone when lacking information. There are numerous examples of this phenomenon across disciplines, but this poses a particularly large barrier to use in healthcare, where real time patient decisions rely on the presence of accurate information and mistakes can cost lives. - Moreover, LLMs are costly to update and retrain as new information becomes available. 
- This poses a challenge in fields where clinical guidelines are routinely updated, such as in PGx, and even current state-of-the-art LLMs had their training data capped several months before the latest CPIC guideline release. Despite these risks, LLMs are already being employed by clinicians, patients, and researchers to answer medical questions and their performance must be studied in order to understand their limitations. 1.4. Prior work on LLMs for PGx PGx is a specialized area of medicine with limited and variable levels of coverage in the US medical and pharmacy curriculum. , - Despite this, PGx has a wide impact on several specialties due to the variety of drugs with actionable guidelines. Therefore, leveraging LLMs in this field has the potential to significantly enhance clinical practice and patient care. For instance, Murugan et al., used GPT-4 and retrieval-augmented generation (RAG) to build PGx4Statins, a PGx chatbot for answering questions about statin therapy guidelines. However, the limitations of LLMs may pose a particular risk in this field, as PGx guidelines are revised and updated irregularly as new evidence becomes available, and inaccurate or outdated advice may result in adverse drug reactions or treatment nonresponse. As such, any PGx chatbot would need to be thoroughly vetted before clinical implementation is possible. While the performance of LLMs at answering general medical questions has been demonstrated, there is limited data on how LLMs perform with PGx queries. Prior to now, there have been no comprehensive, publicly available benchmarks to assess the performance of LLM chatbots in answering PGx queries. PGx4Statins was benchmarked manually, requiring a team of scorers to rate LLM responses based on the criteria of accuracy, relevancy, risk management, language clarity, bias neutrality, empathetic sensitivity, citation support, and hallucination limitation on a 1-5 scale. While this likely represents a gold-standard approach for evaluating real-world performance of a PGx clinical chatbot, PGx4Statins was only able to be tested on a small number of questions and for a single drug, demonstrating the limitations of this evaluation strategy. As new chatbots and language models are released, a more scalable solution is needed to comprehensively test the accuracy of these tools, so that we can then prioritize top performers for more rigorous, labor-intensive testing. To address the absence of evaluation strategies for PGx chatbots, we have developed PGxQA, a resource for evaluating the performance of LLMs in a variety of PGx-related tasks for multiple identified stakeholders: patients, clinicians, and researchers. PGxQA consists of a large corpus of PGx questions generated directly from CPIC data resources, CPIC PGx guidance for Level A drug-gene pairs, or provided by experts in the field. In addition, PGxQA includes tools for higher throughput manual and automated evaluation of accuracy and completeness. PGxQA’s question set covers all of the CPIC Level A guidelines across several dimensions, such as translating genotypes into phenotypes, naming the dbSNP ID(s) for variant(s) that define a particular star-allele, and most importantly, translating phenotypes into clinical recommendations. 
These resources will help promote the responsible development of medical chatbots by allowing us to assess their knowledge of PGx topics, thus lowering barriers to implementation of PGx in the clinic and providing easier access to PGx knowledge for clinicians, patients, and researchers. Pharmacogenetics Pharmacogenetics (PGx) is the study of the role of genetics on an individual’s response to medication, with the aim of bringing tools to the clinic that can utilize a patient’s genetic information to improve medication safety and efficacy. Genetic variations that lead to changes in the activity or availability of drug metabolizing enzymes (DMEs), receptors, channels, and other proteins involved in pharmacodynamics and pharmacokinetics can contribute strongly to interindividual variability in drug response, resulting in an increased risk of adverse drug reactions (ADRs) and nonresponse phenotypes. By identifying genetic markers that influence drug response, PGx enables healthcare providers to predict which patients are more likely to experience adverse reactions or treatment failure. This knowledge allows for more individually tailored medication regimens, optimizing therapeutic outcomes while minimizing the risk of side effects. The overarching goal of PGx is promoting personalized medicine, such that patients receive the right drug and the right dose, at the right time. In doing so, the field aims to improve patient outcomes, enhance medication safety, and reduce healthcare costs associated with ineffective or harmful treatments. Despite the availability of numerous well-characterized, clinically actionable PGx guidelines for widely used medications, the clinical implementation of PGx has been slow. Very few medical centers and clinics routinely use this technology. This gap is due to various factors such as a lack of awareness and education among healthcare providers, the constantly evolving body of PGx guidelines, and technical challenges in integrating PGx data into electronic health records (EHRs). The cost of PGx testing and variable insurance coverage can also pose significant financial barriers, while regulatory and legal concerns may also impact the extent of implementation of PGx testing in hospital systems. Lack of domain expertise and education among healthcare providers, patients, and researchers in particular poses a critical barrier to the implementation of PGx-guided therapies in clinical settings as this leads to difficulty understanding and interpreting test results, in addition to limited research conducted regarding the clinical impact of such technologies. Existing PGx Resources and Limitations Given that there are many causes for interindividual variability in treatment response as well as a need for guidance in interpreting PGx screening results, multiple independent bodies of experts have published research and guidelines to inform PGx-guided treatment. The Clinical Pharmacogenetics Implementation Consortium (CPIC) is one such group that has generated a set of specific drug recommendations to guide prescribing practices in the presence of genetic test results. CPIC has established 43 evidence-based clinical guidelines for 151 commonly prescribed medications. These recommendations were created based on a large body of evidence showing the impact of known PGx alleles in altering drug metabolism or response. 
Methods

2.1. Automated Question Generation

To generate a meaningfully large corpus of evaluation questions, a significant proportion of the question bank was generated using custom Python scripts that extract relevant information from the 'CPIC Data' database in CPIC's GitHub repository and format it as question-answer pairs. The psycopg2 package was used to load and query CPIC's PostgreSQL database, and pandas was used to output tables of questions. Because many questions are redundant, and because pharmacogenes with many defined star alleles could be over-weighted in our overall scoring, we implemented a subsetting tool that takes each set of questions and drops redundant ones to maintain roughly even proportions of questions across the genes and answer choices they cover. All generated questions are available for download, so users can run the entire set or generate custom subsets based on their own criteria.
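To make the shape of this pipeline concrete, the sketch below pulls Level A gene-drug pairs from a locally loaded copy of the CPIC database and formats them as question-answer rows. This is a minimal illustration rather than the project's actual scripts: the connection settings, the `pair`/`drug` table and column names, and the question phrasing are assumptions that should be verified against the loaded CPIC schema.

```python
import pandas as pd
import psycopg2

# Connect to a locally loaded copy of the CPIC database
# (connection parameters are illustrative).
conn = psycopg2.connect(dbname="cpic", host="localhost")

# Pull Level A gene-drug pairs; the table/column names follow CPIC's
# public schema but should be checked against the loaded version.
pairs = pd.read_sql(
    """
    SELECT p.genesymbol, d.name AS drugname
    FROM pair p
    JOIN drug d ON d.drugid = p.drugid
    WHERE p.cpiclevel = 'A'
    """,
    conn,
)

# Format each pair as a question-answer row for the benchmark.
questions = pd.DataFrame({
    "question": [
        f"Which gene has a CPIC Level A guideline with {drug}?"
        for drug in pairs["drugname"]
    ],
    "answer": pairs["genesymbol"],
})
questions.to_csv("drug_to_gene_questions.csv", index=False)
```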
2.2. LLM Querying

To query the studied LLMs, we wrote a set of Python scripts to load our questions and send them to a local or remote LLM server. We defined a universal base prompt so that all LLMs work from similar basic instructions. We used the 'openai' Python package along with an OpenAI API key to remotely query GPT-3.5-turbo, GPT-4-turbo, and OpenAI's latest model as of writing, GPT-4o. We also used the 'openai' Python interface to send queries to a locally hosted instance of the open-source LLM Llama 3. Lastly, we used the 'requests' library to connect to Google's Generative Language REST API to query Gemini 1.5 Pro, Google's flagship LLM product. We queried the LLMs with all of the questions in our subsets, outputting tables containing the original question, question metadata, the ground-truth reference answer, the LLM answer, and automated scoring metrics.
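A minimal version of this querying loop, assuming the OpenAI API key is set in the environment and using an illustrative stand-in for the universal base prompt, might look like the following (the same client can also target a locally hosted Llama 3 server by setting `base_url`):

```python
import pandas as pd
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative stand-in for the universal base prompt described above.
BASE_PROMPT = (
    "You are answering pharmacogenomics questions. "
    "Answer concisely and follow all formatting instructions exactly."
)

def ask(question: str, model: str = "gpt-4o") -> str:
    """Send one benchmark question to a chat model and return its answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": BASE_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

questions = pd.read_csv("drug_to_gene_questions.csv")  # from the step above
questions["llm_answer"] = questions["question"].map(ask)
questions.to_csv("llm_answers.csv", index=False)
```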
2.3. Manual Question Generation

2.3.1. Externally Provided Questions

While the structured information within the CPIC database allows us to cover a large proportion of the potential use cases for a PGx chatbot, we also sought real-world sources of PGx questions to represent the information actually sought by clinicians, researchers, and patients. We acquired a set of questions sent to PharmGKB scientists from 2020-2024, containing queries about PGx and the PharmGKB scientists' responses. Additionally, we obtained an anonymized set of questions and answers from Penn Medicine's Pharmacogenetics Consult Service, which provided a rich source of clinician-centric questions on PGx testing, results interpretation, and other relevant queries. We manually pruned these datasets to stay within our scope of PGx information-retrieval queries and formatted them into tables as short-answer questions for our LLMs.

2.3.2. Adversarial Questions

To assess how the models perform when presented with incorrect information, insufficient information, or information outside the scope of PGx queries, we devised sets of structured adversarial questions. These queries were structured to be nearly identical to the question bank extracted directly from the CPIC database, except for having extraneous or missing information. For these queries, we evaluate whether the LLM responds that sufficient information was not available to answer the question, scoring based on the rate of refusal to respond. We additionally ran the whole set of LLM queries while giving the LLMs the option to refuse to respond, so as to compare refusal rates between standard and adversarial queries.
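The refusal comparison itself reduces to a per-group aggregation. A minimal sketch, assuming each answer row carries a `misspecified` flag and that the LLMs were instructed to emit a fixed token when declining (both assumptions for illustration):

```python
import pandas as pd

answers = pd.read_csv("llm_answers.csv")  # hypothetical combined results

# Assume refusals were requested as a fixed token in the base prompt.
answers["refused"] = answers["llm_answer"].str.contains(
    "INSUFFICIENT INFORMATION", case=False, na=False
)

# Mean refusal rate for misspecified vs. properly specified queries.
print(answers.groupby("misspecified")["refused"].mean())
```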
2.4. Automated LLM Metrics

To rapidly score the large corpus of questions and reduce reliance on expert labor, we wrote a set of automated scoring functions that directly measure or approximate the performance of the LLMs on each specific task.

2.4.1. Numeric Scoring

For questions requiring a numeric answer, such as the allele frequency tests, LLMs were instructed to format their response as a number. We then parsed out this number and calculated the mean absolute deviation (the mean of the absolute differences between the LLM answers and the reference answers) over the entire question set.
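For example, the score for an allele-frequency question set can be computed as below; the file and column names and the parsing rule are illustrative assumptions.

```python
import re
import pandas as pd

df = pd.read_csv("allele_frequency_answers.csv")  # hypothetical file

def parse_number(text: str) -> float:
    """Pull the first numeric token out of an LLM response."""
    match = re.search(r"-?\d+(?:\.\d+)?", str(text))
    return float(match.group()) if match else float("nan")

df["llm_value"] = df["llm_answer"].map(parse_number)

# Mean of the absolute differences between LLM and reference answers.
mad = (df["llm_value"] - df["reference_answer"]).abs().mean()
print(f"Mean absolute deviation: {mad:.4f}")
```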
2.4.2. Information Retrieval Scoring

For questions where the task involved returning non-sentence information such as dbSNP IDs, gene symbols, or generic drug names, we instructed the LLMs to return the desired information in a predictable format that can be parsed using regular expressions or by splitting on a defined delimiter such as ';'. For question sets where multiple values make up the answer (for example, listing all of the drugs with CPIC guidelines linked to a particular gene), performance was measured as precision and recall, where precision is the proportion of values in the LLM answer that are found in the reference answer, and recall is the proportion of values in the reference answer that were correctly included in the LLM answer.

2.4.3. Multiple Choice Scoring

For question sets where the questions had a small, finite set of possible answers, we constrained them to multiple choice: the LLM was told to select the correct answer from a provided list of options, making it straightforward to detect programmatically whether the LLM answered correctly. For these queries, accuracy was computed as the proportion of answers that were correctly selected.
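Both scorers are small set and proportion computations; a sketch under the assumption of ';'-delimited list answers:

```python
def precision_recall(llm_answer: str, reference: str) -> tuple[float, float]:
    """Set-based precision/recall over ';'-delimited entity lists."""
    predicted = {v.strip().lower() for v in llm_answer.split(";") if v.strip()}
    expected = {v.strip().lower() for v in reference.split(";") if v.strip()}
    if not predicted or not expected:
        return 0.0, 0.0
    hits = predicted & expected
    return len(hits) / len(predicted), len(hits) / len(expected)

def multiple_choice_accuracy(llm_answers: list[str], keys: list[str]) -> float:
    """Proportion of multiple-choice answers matching the answer key."""
    correct = sum(a.strip().lower() == k.strip().lower()
                  for a, k in zip(llm_answers, keys))
    return correct / len(keys)

# Hypothetical example: drugs with CPIC guidelines linked to a gene.
print(precision_recall("codeine; tramadol; warfarin",
                       "codeine; tramadol; tamoxifen"))  # (0.667, 0.667)
```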
2.4.4. Automated Text Similarity Metrics

For short-answer questions where we wanted the LLMs to answer in one or two sentences, it is nontrivial to score accuracy directly without human graders who have the expertise to evaluate the answers, which presents a scalability issue. To roughly approximate human scoring, we computed automated text similarity metrics between the LLM answer and a human-written reference answer. Specifically, we computed the cosine similarity of the answers under different text embedding models, as well as BERTScore using the microsoft/deberta-xlarge-mnli base model. We selected the model that most closely resembled human judgment by comparing the embedding scores' concordance with human-scored answers. We then calculated the "win rate" of the LLM answers: the percentage of answers where the LLM's similarity score to the reference answer was higher than its similarity score to a generic discordant answer. For example, if asked to make a clinical recommendation where the correct answer is to avoid the drug and the discordant answer is to take the drug as normal, the LLM "wins" if its answer has higher similarity to the reference answer than to the discordant answer.
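A minimal win-rate computation with the bert-score package, using the base model named above; the three example answers are invented for illustration, and in practice the discordant answer is supplied per question.

```python
from bert_score import score

llm_answers = ["Avoid codeine; consider a non-opioid analgesic instead."]
references = ["Codeine should be avoided in this patient."]
discordant = ["Codeine can be taken at the standard dose."]

# bert-score returns (precision, recall, F1) tensors, one value per pair.
p_ref, _, _ = score(llm_answers, references,
                    model_type="microsoft/deberta-xlarge-mnli")
p_bad, _, _ = score(llm_answers, discordant,
                    model_type="microsoft/deberta-xlarge-mnli")

# A "win" is an answer closer to the reference than to the discordant answer.
win_rate = (p_ref > p_bad).float().mean().item()
print(f"Win rate: {win_rate:.3f}")
```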
2.5. Human Review of LLM Answers

2.5.1. Concordance with Automated Metrics

To determine which text metric best captures the semantics of PGx recommendations, we manually reviewed a set of 77 short-answer questions and responses from GPT-4o. For each question, we manually annotated whether the LLM answer was closest to the ground-truth reference answer or to an alternative response containing a discordant recommendation. Using these human labels as ground truth, we computed the F1 score of each text metric, classifying an example as positive if the LLM-reference pair had the highest metric value among all LLM-response pairs. We found that BERTScore Precision maximizes agreement with human judgment.
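This comparison can be scripted directly; a sketch with invented values, where the human label marks whether reviewers judged the LLM answer closest to the reference:

```python
import numpy as np
from sklearn.metrics import f1_score

# human_label[i]: 1 if reviewers judged the LLM answer closest to the
# reference (rather than the discordant alternative), else 0.
human_label = np.array([1, 1, 0, 1, 0])
metric_ref = np.array([0.91, 0.87, 0.55, 0.78, 0.60])  # metric(LLM, reference)
metric_alt = np.array([0.62, 0.70, 0.81, 0.66, 0.74])  # metric(LLM, discordant)

# The metric "predicts" concordance when the reference pair scores highest.
predicted = (metric_ref > metric_alt).astype(int)
print(f1_score(human_label, predicted))  # repeat per candidate metric
```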
2.5.2. Subject Matter Expert Reviews

We recruited 4 PGx experts to perform a granular manual review of a selected subset of short-answer LLM responses. For each question, reviewers were shown a human-written and an LLM-generated response in randomized, blinded order and asked to rate each answer on a five-point Likert scale along the attributes of accuracy (i.e., "This response is clinically/scientifically accurate"), completeness (i.e., "This response contains all of the necessary information to address the question fully"), and safety (i.e., "This answer does not pose any danger to human health or safety"). For each question, reviewers were also presented with the relevant CPIC guideline document. Ratings were collected using the open-source Data Annotator for Machine Learning tool, which was deployed on an AWS EC2 instance with a public IP address so that expert reviewers from around the country could easily work on the assigned scoring task or quit and return to it later.

2.6. Data Analysis and Visualization

The results of our various scoring approaches were analyzed in a Jupyter notebook with pandas, which is included in the GitHub repository for this project. All plots were generated using the matplotlib and seaborn Python packages.

Results

3.1. The PGxQA Question Corpus

In total, the PGxQA question corpus consists of 110,207 questions covering different areas of PGx. While we subsequently present our own tools for querying and evaluating LLMs using this expansive dataset, we make the entire set of questions available as a resource agnostic of downstream evaluation approach. We detail the question types covered in .

3.2. Automated Performance Metric Results

3.2.1. Quantitative or Categorical Responses

OpenAI's GPT models almost universally performed better than Llama or Gemini on numeric, information retrieval, and multiple-choice query metrics . In particular, GPT-4o outperformed the other models or placed second for nearly every metric. However, overall performance varied widely across question categories, with models performing worse on Allele Definition, Allele Function, Diplotype to Phenotype, and Phenotype to Category questions than on the other categories. Scores below 0.5 for most metrics and LLMs indicate that allele-related questions were more likely to elicit incorrect answers, potentially because allele definitions depend on contextual information such as genes. This suggests that LLM training data or approaches may not properly encode allele information, particularly if they do not incorporate tabular data like the CPIC allele tables. Additionally, the number of star alleles has grown massively as new variants and combinations of variants are discovered, and limited references to these alleles in the scientific literature likely contribute to poor performance, since LLMs primarily draw from natural language and at baseline struggle with tabular data. In contrast, other categories saw stronger performance, such as "Genes to drugs" and "Drugs to genes", particularly in the average recall of the LLMs in identifying the expected entities. This indicates that entities such as drugs and genes, which have been described in text for much longer and across a wider variety of sources, may be better encoded within the LLM weights. However, precision in these categories was lacking for several LLMs, indicating that such LLMs may be prone to so-called "hallucinations" when responding to these questions, or may make claims backed by inconclusive evidence.

3.2.2. Short Answer Responses

After comparing each text embedding method to human classification results, BERTScore Precision was the metric most concordant with human assessments of which of several reference answers the GPT-4o-generated response most closely matched . Because this metric came closest to capturing human judgment at a broad scale, we used it as an automated scoring proxy for LLM performance on our short-answer "Phenotype to guideline" tests. Based on these automated tests, GPT-4-turbo slightly outperformed GPT-3.5-turbo, GPT-4o, and Llama 3 in average win rate as defined in the Methods . However, Gemini 1.5 Pro greatly underperformed relative to its counterparts, with an average win rate roughly 0.15 lower than the other models, indicating that its answers likely diverged significantly from both the other models and the ground-truth reference.

3.2.3. Refusal Assessment

When given the option to refuse to respond, LLMs had highly variable rates of refusal on misspecified and properly specified questions (where "misspecified" refers to questions for which there is not sufficient information to answer, or for which no clinical guidelines exist for the requested information). Ideally, a medical chatbot should refuse to answer misspecified questions (a refusal rate of 1 is best) and answer properly specified questions (a refusal rate of 0 is best). Llama, Gemini, and GPT-3.5 all refused both types of questions at roughly equal rates: Llama and Gemini refused very infrequently (<0.2 refusal rate) in either circumstance, while GPT-3.5 refused at roughly equal rates in both (~0.3 refusal rate) . A low refusal rate for misspecified queries may indicate a higher tendency to hallucinate information when given confusing or contradictory queries. In contrast, GPT-4 and GPT-4o showed a higher rate of refusal for misspecified questions (~0.7) than for properly specified questions (~0.3), indicating that these two models exhibit some ability to identify questions containing incorrect information and a greater tendency to avoid hallucinations, though there remains significant room for improvement. These results are further broken down in , which shows the refusal rates for different categories.

3.3. LLM Results with Human Scoring

3.3.1. Manual LLM Metrics

Although the emphasis of this work is on large-scale benchmarks that can be employed widely, even in settings where manual expert review would be intractable, expert reviewers provide invaluable insight into the nuances and details of PGx that cannot easily be captured by automated scorers and text similarity scores. We recruited 4 PGx experts to manually score a set of GPT-4o responses to 15 short-answer questions, and had the same experts score the human-written reference answers. On average, GPT-4o scored lower than the reference answers in all categories, with 'Accuracy' showing the largest gap . While these results reflect that GPT-4o performed well on many questions, there were some answers where it provided highly incorrect or even dangerous responses, such as incorrect recommendations on tacrolimus PGx in the context of liver transplant.
Discussion

This work provides a framework and dataset to evaluate LLM-based chatbots on their ability to answer PGx questions derived from gold-standard PGx data sources. In demonstrating our framework, we have highlighted the strengths and weaknesses of LLMs in handling a wide range of PGx queries, providing guidance for future improvements.

4.1. Avenues for Improving LLMs

The main limitations we identified in LLM-based chatbots are their especially poor accuracy on queries requesting numeric answers or concerning newer or less common star alleles, their tendency to invent false information instead of refusing to answer unknown queries, and their inability to assess the quality of the underlying sources of their claims. These are broader issues in LLM research, and many techniques have been employed to address them.

Prompt engineering involves devising specific prompts to elicit more comprehensive, more accurate, and better-worded responses from LLMs; it is inexpensive and requires minimal technical expertise, making it highly accessible. However, its ability to improve results is limited, and excessive engineering can increase token usage per query, potentially raising costs and processing time. This approach was employed in many of the structured-answer questions in PGxQA and yielded more concise and readily usable information.
LLM Results with Human Scoring

3.3.1. Manual LLM Metrics

Although the emphasis of this work is on large-scale benchmarks that can be employed widely, even in settings where manual expert review would be intractable, it is undeniable that expert reviewers provide invaluable understanding of the nuances and details of PGx which cannot easily be measured by automated scorers and text similarity scores. We recruited four PGx experts to manually score a set of GPT-4o responses to 15 short answer questions, and had those same experts score the human-written reference answers. On average, GPT-4o scored lower than the reference answer in all categories, with ‘Accuracy’ showing the largest gap . While these results reflect that GPT-4o performed well for many questions, there were some answers where it provided highly incorrect or even dangerous responses, such as when it gave incorrect recommendations on tacrolimus PGx in the context of liver transplant.

4. Discussion

This work provides a framework and dataset to evaluate LLM-based chatbots in their ability to answer PGx questions derived from gold-standard PGx data sources. In demonstrating our framework, we have highlighted the strengths and weaknesses of LLMs in handling a wide range of PGx queries, providing guidance for future improvements.

4.1. Avenues for Improving LLMs

The main limitations we identified in LLM-based chatbots are their especially poor accuracy for queries requesting numeric answers as well as newer or less common star alleles, their tendency to invent false information instead of refusing to answer unknown queries, and their inability to understand the quality of the underlying sources of their claims. These are broader issues in LLM research, and many techniques have been employed to address them. Prompt engineering involves devising specific prompts to elicit more comprehensive, more accurate, and better-worded responses from LLMs; it is inexpensive and requires minimal technical expertise, making it highly accessible. However, its ability to enhance results is limited, and excessive engineering can lead to increased token usage per query, potentially raising costs and processing time . This approach was employed in many of the structured answer questions in PGxQA and yielded more concise and readily usable information. Fine-tuning LLMs on specific datasets of PGx questions, such as those generated in this study, presents an opportunity for models to better understand and respond to domain-specific queries. This approach has been shown to improve the relevance and accuracy of LLM responses . Although fine-tuning can be expensive, requiring significant computational resources such as GPUs to train and update the model, it provides a tailored solution for domain-specific prompts. However, fine-tuned models can still hallucinate, as they rely on pre-trained embeddings. Retrieval-augmented generation (RAG) incorporates a retrieval mechanism into LLMs, enabling the model to source information directly from an updated knowledge base. This approach is relatively inexpensive and straightforward to maintain, as updating the knowledge base is less resource-intensive than retraining the LLM itself; it is ideal for domains such as PGx, where knowledge bases are constantly updated. It also reduces the risk of hallucinations by providing the model with direct access to accurate data sources. However, RAG systems require large context windows for effective querying, and a higher degree of human intervention is involved in teaching the LLM how to access and utilize these external sources . To address these needs, efforts are underway by the PharmGKB/CPIC group at Stanford to create AI-ready data for consumption by LLMs. In addition, collaborative efforts are underway by the groups of Dr. Roxana Daneshjou and Dr. Klein at Stanford to develop both clinician-forward and patient-forward tools using generative AI to disseminate this knowledge on the current PharmGKB website and, in the future, in the ClinPGx resource.
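The RAG pattern described above can be summarized in a few lines; the following is a minimal sketch in which the guideline snippets, keyword retriever, and ask_llm stub are hypothetical stand-ins, not an actual PharmGKB/CPIC interface.

```python
# Minimal sketch of retrieval-augmented generation (RAG). Everything here is
# a hypothetical stand-in: a real system would query a maintained knowledge
# base and call an actual chat-completion API.
KNOWLEDGE_BASE = [
    "CPIC: CYP2C19 poor metabolizers on clopidogrel -> consider an alternative agent.",
    "CPIC: TPMT poor metabolizers on azathioprine -> drastically reduce the dose.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Toy keyword-overlap ranking; production systems use embeddings or search indexes.
    overlap = lambda doc: sum(word in doc.lower() for word in query.lower().split())
    return sorted(KNOWLEDGE_BASE, key=overlap, reverse=True)[:k]

def ask_llm(prompt: str) -> str:
    # Stand-in for a chat-completion API call; echoes the grounded prompt.
    return f"[model response grounded in]\n{prompt}"

def answer_with_rag(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only the guideline excerpts below.\n{context}\n\nQuestion: {query}"
    return ask_llm(prompt)

print(answer_with_rag("What does CPIC recommend for CYP2C19 poor metabolizers taking clopidogrel?"))
```

Grounding each answer in retrieved guideline text is what reduces hallucination risk: the model is asked to restate the knowledge base rather than recall facts from its weights.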
4.2. Limitations of PGxQA

PGxQA is intended to be a framework for the initial evaluation of a chatbot's ability to answer PGx questions, particularly questions concordant with pre-existing guidelines (such as information from CPIC, PharmGKB, and others). As shown above, PGxQA provides a variety of metrics that give insight into several dimensions of LLM performance. However, it is important to recognize that PGxQA has several limitations owing to the way it was devised and developed, with a focus on automated assessment. First, the questions in PGxQA are largely created automatically from public PGx data sources. Most questions are query-based, requesting information that would require looking up information from one database rather than synthesizing knowledge across multiple databases or fields. This facilitates automated evaluation at the expense of being able to assess this dimension of LLMs, referred to as “multi-hop reasoning”. To mitigate this, handcrafted questions and actual questions asked of PGx researchers and clinicians are included through the “External Questions” category, though LLM responses to these questions cannot be fully assessed automatically. Our emphasis on automated scoring approaches, while valuable for large-scale evaluation, introduces other limitations as well. We engineered the prompts to instruct the LLM to return answers in our desired format so that responses to our information retrieval tasks could be scored properly, introducing a small possibility that asking for results in this strict format alters performance. As shown in the comparison between the clinical and researcher versions of our “Drugs to genes” questions, the LLMs do seem to perform variably when similar questions are asked in different ways. However, this represents a weakness of LLMs that must also be studied prior to clinical use, given the heterogeneous nature of real-life queries. There are also limitations to our text-similarity-based scoring, as text embeddings do not fully capture the nuances of human judgment. Despite these compromises, we believe that PGxQA will still provide useful metrics for chatbot evaluation, and we anticipate that future work may address many of the limitations of PGxQA and of LLM chatbots.

4.3. Future Directions

Going forward, we expect PGxQA to serve as an automatic evaluation framework to continually evaluate LLMs. This initial evaluation has shown dramatic improvements in performance in more recent models, such as GPT-4o, relative to older iterations such as GPT-3.5. We anticipate that further advancements in model architecture and training will strengthen the ability of these models to function as a valuable resource in PGx. Using PGxQA, we can continually monitor improvements in LLM performance and assess new technologies as they are unveiled. The automatic generation of questions from the CPIC database, which is routinely updated, will also ensure that LLMs are evaluated against the latest information and clinical guidelines. The metrics presented in PGxQA will be continually refined to best reflect the latest evidence. As PGx is a continually evolving area of study, it is essential to have a scalable framework for ongoing evaluation to ensure that model improvements translate into tangible benefits for the field in terms of accuracy and relevance. The future of PGx chatbots holds significant promise as LLMs become increasingly integrated into healthcare settings to provide clinical recommendations and support. These chatbots will be able to use large quantities of PGx literature and evidence to strengthen and personalize their responses to clinician, patient, and researcher queries. The development of advanced LLMs, coupled with emerging techniques like RAG, will help ensure that PGx chatbots can reliably provide personalized and accurate evidence-based guidance regarding medication intake and dosage. However, the future of these chatbots depends on rigorous continual assessment of their performance. The resources developed in PGxQA represent a first-in-class approach to guide automated LLM evaluation, prioritizing accuracy, completeness, and safety for PGx chatbots.
Cholecystokinin-B Receptor-Targeted Nanoparticle for Imaging and Detection of Precancerous Lesions in the Pancreas

1. Introduction

The use of nanoparticles (NPs) to improve imaging of tissues and cancers has increased over the past decade, since these agents exhibit improved sensitivity, penetration depth, and multi-modal capacity compared to small-molecule imaging agents . Although the uptake of NPs is greater in tumor tissues due to enhanced permeability and retention (EPR), these agents can be functionalized with targeting moieties to render the NPs tissue-specific and decrease off-site uptake and toxicity . For in vivo use, NPs can be designed both as imaging tools for the detection of cancers and also equipped with payloads to deliver therapy at the designated site, i.e., “theranostic” agents . Pancreatic cancer has a dismal prognosis , and with the current chemotherapeutic regimens, 5-year survival is only about 10% . One reason contributing to the poor outcome of this malignancy is the inability to diagnose pancreatic cancer at early or precancerous stages . Current radiographic imaging tools such as computerized tomography (CT) and magnetic resonance imaging (MRI) lack sensitivity and are limited to detecting tumors greater than 2 cm in size . Endoscopic ultrasound (EUS) is another approach that has been used in the clinic to evaluate pancreatic cysts or intraductal papillary mucinous neoplasms (IPMNs) for pancreatic cancer . However, only about 15% of pancreatic cancers arise from cysts; the majority (85%) of pancreatic cancers develop from a microscopic precursor lesion called pancreatic intraepithelial neoplasia (PanIN). Unfortunately, PanINs are not identified by standard radiographic imaging or endoscopic techniques. Early stages of disease leading to the development of pancreatic carcinoma have been difficult to study in human subjects, since the majority of those with pancreatic cancer have advanced disease at the time of presentation . A genetically engineered murine model with conditional expression of an endogenous oncogenic KRAS (G12D) allele in the murine embryo has been established , and researchers have used variations of this model to study pancreatic carcinogenesis, since this model has the same genetic and phenotypic features as human pancreatic cancer. In the current work, we used a variant of this murine model, LSL-Kras G12D/+ ; P48-Cre , that closely resembles pathogenesis in the human precancerous pancreas. This murine model progresses through advancing grades of premalignant lesions (PanINs 1, 2, and 3), allowing investigators to study pancreatic carcinogenesis and early-stage malignancy. Since PanIN-3 lesions are considered carcinoma in situ, researchers have been trying to develop tools to facilitate the detection of these precancerous lesions in order to provide earlier intervention and decrease the dismal prognosis of pancreatic cancer. An analogy to the PanIN lesion is the adenomatous polyp in the colon, which is detected by colonoscopy and removed to prevent colon cancer. If high-grade PanIN lesions could be detected and treated or removed before progressing to pancreatic cancer, the incidence of pancreatic cancer could decrease.
We discovered that a G-protein-coupled receptor called the cholecystokinin-B receptor (CCK-BR) is rare in the normal mouse and the normal human pancreas , but this receptor becomes over-expressed in PanIN lesions and is markedly over-expressed in pancreatic cancer . The major ligand for the CCK-BR, gastrin, is detected in the fetal pancreas but is silenced during gestation and not found in the adult pancreas. However, gastrin also becomes re-expressed, through microRNA-27a, in PanINs and in pancreatic cancer , where it stimulates growth by an autocrine mechanism . Downregulation of gastrin mRNA by RNA interference (RNAi) techniques inhibits the growth and metastasis of human pancreatic cancer . Hence, mechanisms to deliver RNA interference to silence gastrin expression could have therapeutic potential. Since gastrin and its receptor, the CCK-BR, become activated in precancerous PanINs and are over-expressed in pancreatic cancer, strategies to target this pathway have been investigated. Using a thiol–maleimide coupling reaction , we covalently bound the maleimide-functionalized receptor-binding moiety of the gastrin-10 ligand to a thiol-functionalized polyethylene glycol-block-poly(L-lysine) (SH-PEG-PLL) polymer, rendering it target-specific to the CCK-BR . The positively charged lysine polymer allowed electrostatic complexation with a negatively charged oligonucleotide to form a stable micelle, while also shielding the positive charge and eliminating toxicity. We previously demonstrated that this receptor-specific polyplex NP was selective in targeting the CCK-BR and delivered gastrin siRNA to successfully inhibit growth and metastases of human pancreatic tumors in mice . Several platforms are being developed that utilize target-specific NPs combined with a metal, a dye, or both (dual) for imaging in pancreatic cancer . Due to the lack of specific targets, early detection of pancreatic cancer has been problematic. A plectin-1-targeted dual-modality NP carrying iron oxide has been described for the imaging of orthotopic pancreatic cancer . Another NP that targets galectin-1, combined with Fe₃O₄, detected small subcutaneous BxPC-3 human pancreatic tumors . Han et al. developed a gadolinium-ion-doped upconversion NP (UCNP) micelle that targets the epithelial cell adhesion molecule (EpCAM, also known as CD326) in pancreatic cancer xenografted to mice. Techniques to improve imaging sensitivity with positron emission tomography (PET) or single-photon emission computerized tomography (SPECT), in combination with anatomical techniques such as computerized tomography (CT), are also being developed. In this context, Benito and colleagues developed a single-chain polymer NP (SCPN) that targets the somatostatin receptor on pancreatic cancer and loaded the particles with gallium-67 for SPECT imaging in mice bearing xenografted pancreatic tumors. Unfortunately, all the above studies utilized mice with established pancreatic tumors, either implanted orthotopically or subcutaneously, and not cancer precursor lesions. Kelly et al. tested a Cy5.5 fluorescent plectin-1-targeted NP crosslinked with iron oxide in mutant KRAS mice with PanIN lesions by making a midline incision over the mouse pancreas and imaging by laser scanning microscopy. Unfortunately, this NP exhibited high biodistribution in the liver and kidney, which would potentially limit its clinical utility. Otherwise, there is a paucity of literature concerning target-specific imaging of pancreatic PanIN lesions.
Since improving survival in pancreatic cancer will require early detection or identification of lesions before they become established cancer, the goal of this investigation was to evaluate the ability of a fluorescent CCK-BR-targeted NP micelle to serve as an imaging tool for detecting precancerous PanIN lesions in vivo in the mutant KRAS mouse pancreas, before pancreatic cancer occurs. Our long-term goal includes the development of this NP for diagnosis and, perhaps in combination with gastrin siRNA or other payloads, for the treatment of PanIN lesions and prevention of pancreatic cancer.
2. Materials and Methods

2.1. Synthesis of the CCK-B Receptor-Targeted Polyplex Nanoparticle

In order to develop the targeted NP, a thiol-functionalized polyethylene glycol-block-poly(L-lysine) (SH-PEG-PLL) polymer was synthesized as previously described . In brief, SH-PEG-PLL was synthesized from trityl-S-poly(ethylene glycol)-block-poly(L-lysine) (Tr-S-PEG-PLL; average MW of 9700 g/mol, with a PEG MW of 5000 g/mol; custom synthesized by Alamanda Polymers, Huntsville, AL, USA) by reduction with trifluoroacetic acid and triethylsilane (98:2 v/v). To render the NP target-specific for the CCK-BR, a gastrin-10 peptide functionalized with maleimide at the N-terminus (Glu-Glu-Glu-Ala-Tyr-Gly-Trp-Met-Asp-Phe-NH₂; MW 1426.48 g/mol; custom synthesis by GenScript USA Inc., Piscataway, NJ, USA) was conjugated to the thiol group on the polymer (SH-PEG-PLL) via a Michael addition reaction at pH 7 in deoxygenated HEPES buffer (100 mM) under an inert atmosphere ( ). The resulting Ga-PEG-PLL was extensively purified using a PD-10 column, dialysis (Spectra/Por® RC membrane, MW cutoff 8–10 kDa; ThermoFisher Scientific, Waltham, MA, USA), and fast protein liquid chromatography with a UV detector set at λ = 220 nm, a size exclusion column (HiPrep 16/60 Sephacryl S-500 HR; GE Healthcare Life Sciences, Chicago, IL, USA), and a mobile phase consisting of sodium phosphate buffer (pH 7.0 with 0.3 M NaCl) at a flow rate of 1 mL/min. The dual-tagged polyplex micelle was prepared by mixing 1 mg/mL of the Ga-PEG-PLL with double-stranded 5′ Alexa Fluor 647- and 5′ Alexa Fluor 488-labeled oligonucleotides (Life Technologies, ThermoFisher Scientific, Waltham, MA, USA) at an N/P ratio of 5/1 (the N/P ratio referring to the molar ratio of cationic polylysine amines, ‘N’, to anionic RNA phosphates, ‘P’, in the polyplex). The nonspecific sequence of the Stealth Custom RNA, 5′ Alexa Fluor 647, was the following: sense: AGCUACACUAUCGAGCAAUUAACUU; anti-sense: AAGUUAAUUGCUCGAUAGUGUAGCU. The sequence of the negative control LOGC_a3N Custom RNA, 5′ Alexa Fluor 488, was proprietary but was confirmed by NCBI BLAST not to selectively inhibit a specific RNA.
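To make the N/P ratio concrete, the following is an illustrative back-of-the-envelope calculation of the amine and polymer amounts needed at N/P = 5/1; the duplex length matches the 25-nt sequences above, but the siRNA quantity and the degree of polymerization of the PLL block are assumptions for the example, not the study's formulation records.

```python
# Illustrative N/P calculation for formulating the polyplex at N/P = 5/1.
# Assumptions for the example: a 25-nt duplex siRNA and a ~100-lysine PLL block.
sirna_nmol = 0.48                 # e.g., 1 mL of a 480 nM duplex solution
phosphates_per_duplex = 2 * 24    # ~24 internucleotide phosphates per 25-nt strand
n_over_p = 5                      # target molar ratio of amines (N) to phosphates (P)

p_nmol = sirna_nmol * phosphates_per_duplex   # anionic phosphate
n_nmol = n_over_p * p_nmol                    # cationic lysine amine required

lysines_per_chain = 100           # assumed degree of polymerization of the PLL block
polymer_nmol = n_nmol / lysines_per_chain
print(f"{p_nmol:.1f} nmol P -> {n_nmol:.1f} nmol N -> {polymer_nmol:.2f} nmol polymer")
```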
2.2. Characterization of the Polyplex Micelle Nanoparticle In Vitro

The self-assembled targeted dual-siRNA polyplex nanoparticles were characterized for hydrodynamic size distribution by dynamic light scattering (DLS) using photon correlation spectroscopy in an optical glass round cuvette (5 mm diameter) in a 3D LS Spectrometer (LS Instruments, Fribourg, Switzerland) equipped with a HeNe laser at 633 nm (25 °C, 90° scattering angle). Two methods were used to confirm co-assembly of the two fluorophore-labeled siRNAs into the same polyplex micelle. Polyplex stability was assessed by treating with heparin. Heparin is a highly sulfated glycosaminoglycan whose strong negative charge can be used to destabilize the electrostatic polyplex; its high negative charge density results in competitive displacement of siRNA from the polyplex. When run on a Tris-borate-EDTA (TBE) gel (Novex™ 20% TBE gels, catalog# EC63155BOX, ThermoFisher Scientific, Waltham, MA, USA), the neutral, intact polyplex with electrostatically complexed RNA remains in the loading wells at the top of the gel. However, upon displacement from the polyplex by heparin, the RNA is free to migrate into the gel and is detected as bands. Dual-tagged RNA polyplex (2 µL of 10 µM siRNA) was added to 13 µL of nuclease-free water and mixed with a pipette. A volume of 5 µL of nuclease-free water or heparin solution (15 mg/mL) (Sigma-Aldrich, Inc., St. Louis, MO, USA) was added to the siRNA and polyplex samples and incubated for 10 min at room temperature (RT). Before analysis on a 20% TBE gel, 5 µL of RNA-loading dye was added to each sample, and the samples were run for 1.5 h at 140 V in an XCell SureLock™ Mini-Cell electrophoresis system (ThermoFisher Scientific, Waltham, MA, USA) with a Thermo Electron 3000–90 power supply (ThermoFisher Scientific). To visualize the RNA, the gel was stained with 1% SYBR Gold (ThermoFisher Scientific) staining solution for 10 min. Other controls were also included, namely free siRNA with and without heparin, and polymer-only and heparin-only samples. RNA bands were visualized with the UV transilluminator of a G:BOX gel documentation unit (Syngene, Frederick, MD, USA). The second procedure to investigate dual complexing of both fluorophore-labeled siRNAs, Alexa Fluor 488 and Alexa Fluor 647, in the same polyplex micelle included analysis of the free NP solution in vitro and incubation with wild-type human PANC-1 pancreatic cancer cells that express the CCK-BR. PANC-1 cells were obtained from ATCC and were cultured in Dulbecco's modified Eagle medium with 10% fetal bovine serum. Cells were harvested from culture plates and centrifuged to pellet the cells. The cells were washed with phosphate-buffered saline (PBS), suspended in dye-free Opti-MEM medium (ThermoFisher Scientific, Waltham, MA, USA), and then plated into the wells of 96-well plates at densities of 2 × 10⁶, 0.5 × 10⁶, 0.125 × 10⁶, and 0.0625 × 10⁶ cells, plus wells with no cells. The cells were then incubated in vitro with the dual-complexed polyplex micelle for 1 h. After incubation, the cells were washed to remove any free NP, resuspended in PBS, and plated into the wells of a 96-well plate. Fluorescence intensity was measured using an IVIS Lumina III In Vivo Optical Imaging System (PerkinElmer, Waltham, MA, USA), with images of the same cells acquired for Alexa Fluor 488 (460–520 nm excitation/emission) and for Alexa Fluor 647 (620–670 nm excitation/emission) in order to determine the fluorescence of each fluorophore in the cells independently.

2.3. Analysis of the CCK-B Receptor as a Target for the Polyplex Nanoparticle

Wild-type (WT) human PANC-1 cells that express the CCK-BR and PANC-1 cells stably transfected to over-express the CCK-BR (CCK-BR-OE), as described , were utilized to investigate the binding of the untargeted polyplex micelle and the CCK-BR target-specific polyplex. PANC-1 WT and CCK-BR-OE cells were harvested, centrifuged, washed, and suspended in Opti-MEM medium. Cells were seeded in the wells of a 24-well plate at six densities ranging from 10⁶ to 3.1 × 10⁴ cells and then incubated with either the targeted or untargeted dual Alexa-fluorophore polyplex micelle for 1 h in vitro. After the incubation period, cells were removed and washed in PBS to remove free fluorescent NPs. Each treated cell group (10⁶ to 3.1 × 10⁴ cells) was placed in a 96-well plate and imaged in the IVIS system. PBS solution alone was used as a negative control. Fluorescence of the cells in the far-red range was captured for each group of cells (targeted versus untargeted; wild-type versus CCK-BR-OE PANC-1 cells) in the IVIS System using the Alexa Fluor 647 filter set (620–670 nm).
2.4. Breeding and Genotyping of Mice

In the LSL-Kras G12D allele, a silencing (Lox-Stop-Lox) cassette in the mouse genomic Kras locus lies upstream of a modified exon 1 engineered to contain a c.35G>A nucleotide change, resulting in a glycine-to-aspartate substitution (G12D) at codon 12. This mutation is commonly found in human pancreatic adenocarcinoma, and expression of the mutated allele is achieved by interbreeding LSL-Kras G12D mice with animals that express Cre recombinase from the pancreatic-specific promoter, P48. LSL-Kras G12D/+ ; P48-Cre ( KC ) mice were bred and genotyped as previously described . Heterozygote breeders (male or female) were mated. This Kras allele is non-functional in its germline configuration; therefore, the mice are maintained by backcrossing heterozygous animals to C57BL/6. The usual litter size is approximately 8. At the time of weaning, mouse genotypes were determined by PCR analysis of tail DNA preparations. Tail biopsies were obtained from <3-week-old mice, with <0.5 cm removed after topical application of ice-cold ethanol for anesthesia. Approximately 1 in 4 pups carries the LSL-Kras G12D/+ ; P48-Cre genotype and develops PanINs. Hence, approximately 136 mice (34 ÷ 0.25) were genotyped to obtain the N = 34 mice used in this investigation.

2.5. Administration and Imaging of KC Mice and Organs with Polyplex Nanoparticles

All animal studies were performed in an ethical fashion under a protocol approved by the Georgetown University IACUC. In this transgenic model, PanINs begin to develop by 3 months, PanIN-3 lesions by 4–6 months, and cancer can occur by 8–10 months. The rationale for these cohorts is that by an age of 5 months, mice with the LSL-Kras G12D/+ ; P48-Cre genotype will have developed PanINs of all three stages (PanIN 1, 2, and 3) but rarely pancreatic cancer, whereas by an age of 10 months, early pancreatic cancer may be found histologically. Ten days before imaging in the IVIS System, mice were changed to a purified alfalfa-free diet (ENVIGO, cat# TD.97184; Indianapolis, IN, USA) to decrease any auto-fluorescence from food in the far-red range. The fur from the mouse abdomen and mid back was shaved prior to imaging. The optimal time to harvest organs after injection of the polyplex micelle was determined by evaluating the fluorescent emission in the IVIS System in mice ranging from 5 to 8 months of age. Mice were anesthetized with isoflurane, and a baseline image was obtained using the IVIS Lumina III In Vivo Optical Imaging System. After baseline images were recorded, mice were injected with the CCK-BR target-specific NP in a 0.1 mL volume via intraperitoneal injection, and fluorescent emission was measured with the epi-fluorescent 620–670 filter at 3, 4, 5, and 6 h and again 24 h after injection. Each mouse was injected 2–3 times over the period of one week, allowing 24–48 h for clearance between injections, and N = 12 mice were used for the uptake experiment. The peak fluorescent emission was determined to occur at 5 h, and this time interval was selected for harvesting organs for ex vivo imaging in mice injected with targeted or untargeted NPs. Mice used for the ex vivo experiments and immunohistochemistry included mice of 4, 5, 6, 7, 8, and 10 months of age ( N = 16 mice). For controls, age-matched wild-type C57BL/6 mice were injected with the targeted polyplex NP ( N = 3), and age-matched 5-month-old, 7-month-old, and 8-month-old LSL-Kras G12D/+ ; P48-Cre mice were injected with untargeted polyplex ( N = 3 per group).
All polyplex NPs injected (whether targeted or untargeted) were complexed with both the Alexa Fluor 488-labeled siRNA for immunohistochemistry (IHC) and the Alexa Fluor 647-labeled siRNA for imaging. Since 5 h was identified as the time of peak fluorescence after polyplex injection, this time was also selected for harvesting of the pancreas and other major organs for histological analysis. Tissues were excised and imaged ex vivo in the IVIS System to compare targeted versus untargeted treated mice, followed by tissue fixation with 4% paraformaldehyde at room temperature for 18–24 h and paraffin embedding.

2.6. Immunohistochemistry for Detection of Alexa Fluor 488-Labeled Polyplex Nanoparticles in Tissues

Immunohistochemistry to detect Alexa Fluor 488 was performed using an ImmPRESS® HRP Horse Anti-Rabbit IgG PLUS Polymer Kit, Peroxidase (Vector Labs, Burlingame, CA, USA; Catalog #: MP-7451) on 5 μm tissue sections mounted on Fisherbrand™ Superfrost™ Plus Microscope Slides (ThermoFisher Scientific), which were dewaxed and rehydrated with double-distilled H₂O. Heat-induced epitope retrieval (HIER) was performed by heating sections in 0.01% citraconic anhydride containing 0.05% Tween-20 in a pressure cooker set at 122–125 °C for 30 s. Slides were incubated with blocking buffer (TBS with 0.05% Tween-20 and 0.25% casein) for 10 min and then incubated with rabbit anti-Alexa Fluor 488 antibody (1:400; Cat. No. A-11094, Invitrogen) diluted in blocking buffer overnight at 4 °C. Slides were washed in 1× TBS with 0.05% Tween-20, and endogenous peroxidases were blocked using 1.5% (v/v) H₂O₂ in TBS (pH 7.4) for 10 min. Slides were incubated with Rabbit Polink-1 HRP (Vector Labs) for 30 min at room temperature, washed, and incubated with Impact™ DAB (3,3′-diaminobenzidine; Vector Laboratories) for 2–5 min. Slides were washed in ddH₂O, counterstained with hematoxylin, and mounted in Permount (ThermoFisher Scientific). Whole tissue sections were scanned at high magnification (200×) using the ScanScope AT2 System (Leica Biosystems, Buffalo Grove, IL, USA), yielding high-resolution data from the entire tissue section. Representative high-resolution images were extracted from these whole-tissue scans. Confirmatory immunohistochemical staining was performed with a rabbit monoclonal anti-PEG antibody (Cat#ab51257; Abcam; Waltham, MA, USA) diluted 1:1000 following heat-induced epitope retrieval in citrate buffer for 20 min.

2.7. Immunohistochemistry (IHC) for CCK-BR in KRAS Mouse Pancreas and Human Pancreas

In order to confirm that the mouse PanINs that accumulated the CCK-BR-targeted NP indeed expressed the CCK-BR, CCK-BR immunohistochemistry (IHC) was performed on tissue sections (5 µm) from 5-month-old mutant KRAS mouse pancreas. To investigate the clinical importance of our findings and the potential for this CCK-BR target-specific polyplex NP to be used as an imaging tool for early detection of high-grade PanINs or pancreatic cancer in human subjects, CCK-BR IHC was also performed on a human pancreas tissue microarray (TMA) obtained from US Biomax (Rockville, MD, USA; Cat. No. BIC14011b). The human pancreas TMA contained 48 unstained cores of formalin-fixed, paraffin-embedded human pancreas tissues from normal pancreas obtained at autopsy and from subjects with various grades of PanINs.
Of these, there were N = 5 subjects with PanIN grade 1, N = 6 with PanIN grade 2, N = 4 with only PanIN grade 3, and N = 8 with both PanIN grade 3 and cancer (total with PanIN grade 3, N = 12); there were also N = 8 sections from normal controls. The KRAS mouse pancreas tissue sections and the human tissue microarray were deparaffinized and subjected to antigen retrieval. The slides were washed three times in 1× PBS for 2 min each, and then blocking was performed according to the manufacturer's instructions (Anti-goat HRP-DAB IHC Detection Kit; CTS008-NOV, Novus Biologicals, Centennial, CO, USA). The slides were then incubated with the primary CCK-BR antibody (Cat#Ab77077, Abcam) at a 1:200 dilution in PBS overnight at 4 °C. After rinsing, the slides were incubated with 1–3 drops of Biotinylated Secondary Antibody (Novus Biologicals) for 60 min. The slides were then treated with 1–3 drops of High Sensitivity Streptavidin conjugated to horseradish peroxidase (HSS-HRP) (Novus Biologicals) for 30 min and washed. Visualization was achieved by enzymatic conversion of the chromogenic substrate 3,3′-diaminobenzidine (DAB) into a brown-colored precipitate by horseradish peroxidase (HRP) at the sites of CCK-BR localization. Images were scanned using an automated, high-capacity Aperio GT450 digital pathology slide scanner (Leica Biosystems, Buffalo Grove, IL, USA) and captured with Aperio ImageScope software. The images were analyzed for intensity of CCK-BR staining using the public-domain software ImageJ (NIH, Bethesda, MD, USA) and corrected for the area of tissue examined.

2.8. Statistical Analysis

For immunohistochemical comparisons between normal mouse pancreas tissues and PanINs, images were scanned using an Aperio GT450 scanner and captured ( N = 10 per grade) with Aperio ImageScope software. CCK-BR IHC was analyzed by densitometry with ImageJ software, corrected for the area of tissue examined. Statistical analysis was performed with GraphPad Prism software, with Bonferroni correction applied for multiple comparisons to control tissue. Fluorescent emission of PANC-1 cells was recorded by the IVIS instrument, and mean emission intensity values were normalized for each wavelength, with the emission intensity at 2 million cells set to 100% and at zero cells set to 0 for each of the 647 and 488 wavelengths. This normalized the mean values for fluorophore fluorescence efficiency and allowed direct comparison of the fluorescent emission/cell density response of both fluorophores.
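The normalization described above is a simple two-anchor (min-max) rescaling; the following is a minimal sketch, with hypothetical intensity values standing in for the actual IVIS readings.

```python
# Sketch of the per-wavelength normalization described above: the mean
# emission at 2 x 10^6 cells is set to 100% and the no-cell value to 0%,
# so the two fluorophores' emission/cell-density curves can be compared
# directly. All intensity values are hypothetical placeholders.
raw = {
    "647": {0: 120.0, 62500: 400.0, 125000: 700.0, 500000: 1900.0, 2000000: 5200.0},
    "488": {0: 300.0, 62500: 900.0, 125000: 1500.0, 500000: 4200.0, 2000000: 11000.0},
}

normalized = {}
for channel, series in raw.items():
    lo, hi = series[0], series[2000000]   # anchors: no cells and 2 million cells
    normalized[channel] = {n: 100 * (v - lo) / (hi - lo) for n, v in series.items()}

# After rescaling, the two channels are directly comparable percentages.
print(normalized["647"][500000], normalized["488"][500000])
```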
3. Results

3.1. Synthesis of the CCK-B Receptor-Targeted Polyplex Nanoparticle

A polyplex micelle NP was developed from a thiol-functionalized polyethylene glycol-block-poly(L-lysine) (SH-PEG-PLL) to selectively target the CCK-BR expressed on PanIN lesions in the pancreas during pancreatic carcinogenesis. The backbone was rendered specific to the CCK-BR by conjugation of the gastrin-10 peptide (Ga-10) to the polymer via a thiol–maleimide coupling reaction ( A). When the positively charged lysine backbone moiety of this NP was electrostatically complexed with negatively charged oligonucleotides, a self-assembled polyplex micelle formed ( B). For imaging purposes, the polyplex NP was complexed simultaneously with two separate oligonucleotides, the custom RNAs 5′ Alexa Fluor 647 and 5′ Alexa Fluor 488, each at a final concentration of 480 nM. The Alexa Fluor 647-tagged RNA was selected for its fluorescent properties in the far-red wavelength range, which allowed in vivo imaging of mutant KRAS mice with an IVIS Lumina III In Vivo Optical Imaging System for biodistribution evaluation. The Alexa Fluor 488-tagged RNA was also included in the polyplex micelle so that high-resolution localization of the polyplex micelle in early PanIN lesions in the pancreas and other organs could be confirmed ex vivo by co-registration of anti-Alexa Fluor 488 immunohistochemistry and histopathology (i.e., H&E) images.

3.2. Characterization of the Polyplex Micelle Nanoparticle In Vitro

After self-assembly, the dual-tagged polyplex NPs were characterized for their hydrodynamic size by dynamic light scattering (DLS) using photon correlation spectroscopy (PCS) as described . Each measurement was conducted in triplicate with a laser at a wavelength of 632.8 nm and a scattering angle of 90° for 20 s, yielding a mean size of 91.58 nm for the targeted polyplex micelle ( A). The polydispersity index (PDI) of the NP was ~0.32.
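As a rough aid to interpreting the PDI, the following is a back-of-the-envelope sketch; it assumes the generic cumulant-analysis relation PDI ≈ (σ/mean)² for the intensity-weighted size distribution, not the instrument's exact fitting routine.

```python
# Back-of-the-envelope interpretation of the DLS polydispersity index (PDI).
# Assumes the generic cumulant relation PDI ~ (sigma / mean)^2; this is an
# illustration, not the 3D LS Spectrometer's exact analysis.
mean_size_nm = 91.58
pdi = 0.32

relative_width = pdi ** 0.5            # sigma / mean
sigma_nm = relative_width * mean_size_nm
print(f"Estimated width: +/- {sigma_nm:.0f} nm (~{100 * relative_width:.0f}% of the mean)")
```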
3.2.1. Heparin Displacement Assay

A series of in vitro experiments was performed to confirm the self-assembly of this micelle NP and that the Alexa Fluor 647- and Alexa Fluor 488-tagged siRNAs are co-assembled in the same polyplex. The principle of the heparin displacement assay is that negatively charged heparin is able to displace negatively charged dsRNA from the electrostatic complex with positively charged polylysine, resulting in release of the complexed dsRNA. When run on a TBE gel, the neutral, intact polyplex with electrostatically complexed RNA remains in the loading wells at the top of the gel; upon displacement from the polyplex by heparin, the RNA is free to migrate into the gel and is detected as bands. From the heparin displacement assay, it is evident that there is polyplex formation with inclusion of both fluorophore-tagged siRNAs ( B, lane 4). The absence of any RNA band in the polyplex sample without heparin suggests that the polyplex is intact and contains no free RNA. When the polyplex is disrupted by heparin treatment, it releases both RNAs, which are visualized on the gel ( B, lane 5). Heparin-only and polymer-only control samples did not stain with SYBR Gold staining solution, as expected. Individual RNAs with and without heparin treatment showed the same bands. Note that the Alexa Fluor 488-labeled RNA fluoresces brightly in the UV transilluminator, and both single- and double-stranded RNA are observed, as well as some contaminating untagged RNA species from the tagged RNA synthesis, as is commonly observed. Based on the results of the heparin displacement assay, as expected, there is no free RNA in the polyplex sample. Therefore, all RNA added to form the polyplex is in the polyplex micelle (i.e., at a concentration of 480 nM each).

3.2.2. Confirmation of Dual Fluorophore Labeling In Vitro

Fluorescence emission was measured in the IVIS instrument using either the epi-fluorescent 460–520 filter for measurement of Alexa Fluor 488 or the epi-fluorescent 620–670 filter for measurement of Alexa Fluor 647. The fluorescent emission of the dual-tagged polyplex NP solution imaged in the IVIS instrument demonstrated comparable fluorescent emission curves at increasing volumes when normalized for the fluorescence efficiency of the fluorophores ( C). Uptake of the dual-tagged polyplex NP was also examined in human PANC-1 pancreatic cancer cells to investigate uptake of the polyplex micelle into cells and to confirm co-assembly of the Alexa Fluor 647- and Alexa Fluor 488-tagged siRNAs, with equal distribution of both siRNAs. D shows the normalized signal data from PANC-1 cells treated with the complexed micelle NP at various cell densities in vitro and imaged in the IVIS at the 488 and 647 wavelengths. The intensity of the 488 and 647 fluorescent emission increases with the number of cells, demonstrating equivalent distribution of the Alexa Fluor 647-/Alexa Fluor 488-tagged siRNAs in the formulated polyplex. These cells were washed to remove any free polyplex; thus, any fluorescence recorded was from intracellular uptake. These data support equal complexing of the dual siRNA fluorophores in the NP. Furthermore, the normalized signal data demonstrate an equivalent fluorescent emission/cell density response for both fluorophores.

3.2.3. CCK-B Receptor-Targeted Nanoparticles Have Enhanced Uptake in Cells

In order to demonstrate that selective targeting of the CCK-BR in pancreatic cancer improves uptake of the polyplex, we examined the fluorescent emission of wild-type human PANC-1 cells and PANC-1 cells engineered to over-express the CCK-BR, using NPs that were untargeted or targeted and the epi-fluorescent 620–670 filter for measurement of Alexa Fluor 647. Treating CCK-BR-expressing wild-type PANC-1 cells with the targeted polyplex micelle enhances uptake in comparison to the untargeted NPs ( E; rows A and B). We showed in our previous experiments with the CCK-BR-targeted polyplex that the construct is internalized, based on gene knockdown and confocal microscopy studies with fluorescently tagged polyplex . PANC-1 cells transfected to over-express (OE) the CCK-BR ( E, rows D and E) have a marked increase in uptake of the targeted polyplex NP compared to the same density of wild-type cells ( E; rows A and B). Targeting the polyplex micelle to the CCK-BR enhances uptake of the NPs, as exhibited by the heightened fluorescent emission compared to the over-expressing cells treated with the untargeted NP. The mean ± SEM fluorescence is plotted for replicate treatments in wild-type and CCK-BR over-expressing cells in F, showing that the untargeted wild-type cells have the least fluorescence and the targeted over-expressing PANC-1 cells have the greatest fluorescence. Furthermore, the mean fluorescence was significantly increased in the PANC-1 cells treated with the targeted NPs compared to the untargeted NPs ( p < 0.001).
3.3. In Vivo Imaging of KC Mice with Targeted Fluorescent-Tagged Nanoparticles

LSL-Kras G12D/+ ; P48-Cre ( KC ) mice from our genetically engineered animal colony were selected at ages ranging from four months, when high grade PanIN lesions begin to develop histologically , up to ten months, when most PanIN lesions are typically grade 3 or carcinoma in situ. An initial experiment with sequential imaging over a 24 h period was performed to determine the optimal time for NP uptake in the pancreas after injection. Fluorescence in the pancreas was visualized only in the mice treated with the CCK-BR target-specific fluorescent polyplex NP. Images of a representative 5-month-old anesthetized mouse are shown at baseline and 3, 4, 5, 6, and 24 h post IP injection ( A–F) within the IVIS imaging system, demonstrating fluorescence in the far-red range consistent with uptake of the Alexa Fluor 647-labeled polyplex localized in the area of the pancreas. The mean ± SEM for age-matched mice is plotted over time ( G), demonstrating that peak fluorescent intensity in the pancreas was reached 5 h after injection and was absent 24 h after injection. Of note, we previously found that 5 h was also the peak NP uptake in mice bearing orthotopic human pancreatic tumors using NPs tagged with the fluorophore Cy3 . Therefore, this 5 h time point was selected to ethically euthanize the mice after treatment and collect the pancreas and other organs for ex vivo fluorescence and histology. With the same gain settings as the IVIS images acquired for the targeted construct, a representative 5-month-old KC mouse injected with untargeted NPs shows no specific, high-intensity fluorescence in the pancreas at 5 h post dose ( ).

3.4. Ex Vivo Imaging of Organs from Mice Injected with Targeted or Untargeted Nanoparticles

Organs excised at 5 h from an 8-month-old KRAS mouse treated with CCK-BR-targeted polyplex NPs are shown in A. Only the excised pancreas demonstrated positive fluorescence ex vivo in comparison to the other organs ( A). Ex vivo organs harvested from an age-matched KRAS mouse 5 h after injection with untargeted nanoparticles do not reveal any fluorescence ( B). Ex vivo pancreata excised from 7-month-old KRAS mice are shown side-by-side ( C), demonstrating in another age-matched group that the pancreas is fluorescent only in the mouse treated with the targeted NPs. Note that as the mouse increases in age (comparing the pancreas of the 8-month-old mouse, A, to that of the 7-month-old mouse, C), the fluorescence in the pancreas increases, correlating with the increased number of PanIN-3 lesions. There was no evidence of fluorescence in the pancreas of wild-type control mice injected with the targeted polyplex NP linked with the same fluorescent oligonucleotide probes ( D). A representative mouse pancreas and its fluorescent emission are shown for mice aged 5, 6, 7, 8, and 10 months ( E). The intensity after NP injection increases in the ex vivo pancreas with the age of the mouse and corresponds to the increasing grade of PanINs.

3.5. Confirmation of Nanoparticle Uptake in PanINs by Immunohistochemistry

To confirm that the fluorescent-labeled targeted polyplex NPs accumulated in CCK-BR-expressing pancreatic PanIN lesions and that there was limited off-target uptake in other organs, excised tissues were fixed and paraffin-embedded for immunohistochemistry (IHC) and hematoxylin and eosin (H&E) analysis.
Immunohistochemical analysis of Alexa Fluor 488 was chosen instead of fluorescence because it allows high-resolution correlation of polyplex distribution with H&E histological lesion grade, in order to validate the CCK-BR as a biomarker/target for early PanIN detection by the targeted polyplex. Fluorescence data, while supportive, would not have been as compelling for this purpose. Tissue sections from the pancreas and other major organs were examined by IHC with a selective rabbit anti-Alexa Fluor 488 antibody and then visualized with Rabbit Polink-1 horseradish peroxidase (HRP) staining. A representative low-power image of a 10-month-old mouse pancreas stained with H&E revealed the characteristic pancreatic histology, with advanced PanIN lesions and the surrounding fibrosis that occurs during pancreatic carcinogenesis ( A). Confirmation that the CCK-BR-targeted polyplex micelle NPs were distributed to the high grade PanIN lesions was demonstrated by IHC in the pancreatic tissue section stained with a selective Alexa Fluor 488 antibody ( B). Higher magnification of this 10-month-old mouse pancreas stained with H&E shows high grade PanIN-3 lesions ( C). The corresponding tissue shown in D stained with anti-Alexa Fluor 488 shows that the immunoreactivity was more intense in the high grade PanIN-3 lesions, with minimal staining in the earlier-stage PanIN-1b and PanIN-2 lesions and an absence of staining in the normal pancreatic acinar cells. H&E staining of a KRAS pancreas from a 5-month-old mouse is shown at low magnification ( E) and at higher magnification ( G). The same tissues from the 5-month-old mice demonstrate positive immunoreactivity for Alexa Fluor 488 localizing to the high grade PanINs in F,H, respectively. Note the lack of staining in the normal pancreatic acinar cells and islet cells of the pancreas, confirming that the NP micelle selectively targets the PanIN epithelial cells that express the CCK-BR. This immunoreactivity and the intense fluorescence in the ex vivo pancreas confirm that the fluorescence identified in the living mice 5 h after intraperitoneal injection of the dual Alexa Fluor 488-/647-labeled polyplex NP was indeed the visualization of polyplex within the PanIN lesions of the mouse pancreas. These data are proof of principle that the CCK-BR-targeted NPs reach the mouse pancreas and are taken up into the abnormal precancerous epithelium 5 h after injection.

3.6. Immunoreactivity with Anti-PEG and Anti-CCK-BR Antibodies in 5-Month-Old KRAS Mouse Pancreas

Localization of the CCK-BR-targeted polyplex NP in the pancreatic PanINs of mice was further confirmed by immunoreactivity to polyplex components with an antibody to polyethylene glycol (PEG). Similar to the immunohistochemical localization with the Alexa Fluor 488 antibody above, the PEG antibody showed increased staining in the high grade PanIN lesions of the mouse pancreas ( A,B). The normal pancreatic acinar cells that lack the CCK-BR did not exhibit any immunoreactivity to the PEG antibody ( B). Age-matched KC mice injected with untargeted polyplex NP and probed with the same PEG antibody did not reveal any immunoreactivity in the pancreatic PanIN lesions ( C,D). Mouse pancreas sections from the KC mice probed with a CCK-BR-selective antibody demonstrated CCK-BR expression in the epithelial cells of the high grade PanIN lesions ( E,F).
These findings are confirmatory evidence that the fluorescence observed in the anesthetized mice by the IVIS imaging system after targeted polyplex NP injection was indeed due to uptake of the NP in the CCK-BR-expressing precancerous PanIN lesions of the mouse pancreas. Furthermore, these findings support the specificity of the CCK-BR-targeted NP for PanIN tissue uptake and localization and the lack of specificity of the untargeted NP.

3.7. The Targeted Polyplex Nanoparticles Have Limited Off-Target Toxicity to Other Organs

To confirm PanIN specificity and limited off-target uptake, other organs were also examined histologically and by anti-Alexa Fluor 488 IHC, with confirmation by anti-PEG immunohistochemical analysis. We were selective in the tissues analyzed by immunohistochemistry (IHC), choosing tissues based on ex vivo fluorescence or the likelihood of polyplex accumulation given the CCK receptor expression pattern or prior knowledge of polyplex distribution. The colon and lung were not selected for both Alexa Fluor 488 and PEG IHC, since these tissues did not demonstrate an ex vivo fluorescent signal, do not have CCK receptors, and historically do not accumulate polyplex. However, we performed Alexa Fluor 488 IHC on lung and PEG IHC on colon to rule out potential polyplex accumulation. As expected, the findings were negative in these tissues. Immunohistochemistry of these other organ tissue sections with the Alexa Fluor 488 antibody was negative overall but showed some background and positive staining in the kidney tubules ( A). Immunoreactivity of the excised organs probed with an antibody for PEG ( B) lacks any staining, and no immunoreactivity was seen in the kidney. This difference between the two antibodies suggests that the positive staining identified in the tissues probed with the Alexa Fluor 488 antibody represents free Alexa Fluor 488, such as after polyplex uptake, degradation, and excretion by the kidney. Slides from the Alexa Fluor 488 immunohistochemistry from three separate mouse pancreata were digitized with an Aperio ScanScope XT (Leica) at 200× in a single z-plane. Cell detection algorithms were run, and staining intensity was scored on a scale of 0–3: 0 for no staining, 1 for mild, 2 for moderate, and 3 for strong staining. The percentage of cells staining positively is reported, and an H-score, which integrates percent positivity and staining intensity, was calculated using QuPath as follows:

H-score = [1 × (% cells 1+) + 2 × (% cells 2+) + 3 × (% cells 3+)], giving a range of 0–300.

The results of the H-score are shown in . These data support the negative ex vivo fluorescence and the lack of NP uptake in the other organs visualized in the IVIS. Negative controls were run for each tissue by replacing the primary anti-PEG antibody with a nonspecific antibody reagent from the same host species and isotype (isotype control). Staining was considered specific when there was staining with the primary anti-PEG antibody and no staining in the isotype control or control untreated tissue. The positive and negative anti-PEG-stained control tissues are shown in . The pancreata from control C57BL/6 wild-type mice treated with CCK-BR-targeted NPs were negative for Alexa Fluor 488 immunoreactivity ( ).
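The H-score formula above maps directly to a few lines of code. As a minimal sketch with hypothetical percentages (the function name and inputs are illustrative, not the actual QuPath script):

```python
# H-score per the formula in the text:
# H = 1*(% cells 1+) + 2*(% cells 2+) + 3*(% cells 3+), range 0-300.

def h_score(pct_1plus: float, pct_2plus: float, pct_3plus: float) -> float:
    """Combine percent-positive cells at each staining intensity into an H-score."""
    score = 1 * pct_1plus + 2 * pct_2plus + 3 * pct_3plus
    assert 0.0 <= score <= 300.0, "H-score must fall within 0-300"
    return score

# Hypothetical example: 20% of cells stain 1+, 10% stain 2+, 5% stain 3+.
print(h_score(20.0, 10.0, 5.0))  # -> 55.0
```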
3.8. Human PanIN Lesions Express CCK-B Receptors

To show the clinical relevance and translational potential for use in human subjects, we examined human pancreas tissue using a commercial tissue microarray containing normal pancreas tissues and specimens with PanINs of various grades. This tissue array was stained with the CCK-B receptor antibody (as above in E,F), demonstrating the absence of CCK-BR immunoreactivity in normal human pancreas ( A, top left) and the presence of CCK-BR immunoreactivity in human PanINs of increasing grade, including PanIN-1, PanIN-2, and PanIN-3 lesions ( A). Immunoreactivity for the CCK-BR is negligible in the normal human pancreas, and the staining increases with increasing PanIN grade. The integrated density of the CCK-BR immunoreactivity, analyzed with ImageJ, is plotted for normal human pancreas tissue and increasing PanIN grade ( B). Integrated density is the sum of the pixel values in the image or selection, which is equivalent to the product of the area and the mean gray value.
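As a minimal sketch of the integrated-density definition above (the pixel values are hypothetical, and NumPy stands in for the ImageJ measurement):

```python
import numpy as np

# Integrated density as defined in the text: the sum of all pixel values in a
# selection, equivalently the pixel count (area) times the mean gray value.
pixels = np.array([[12.0, 30.0, 45.0],
                   [8.0, 60.0, 22.0]])  # hypothetical gray values

integrated_density = pixels.sum()
area, mean_gray = pixels.size, pixels.mean()

print(integrated_density)   # 177.0
print(area * mean_gray)     # 177.0 -- identical by definition
```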
Current recommendations for pancreatic cancer screening include high-risk individuals with a genetic predisposition or a family history of pancreatic cancer . Although endoscopic ultrasound and MRI are used to monitor subjects with cystic lesions of the pancreas , the overwhelming majority of cancers develop from microscopic PanIN lesions that will require more sensitive imaging tools for detection. Although surgical resection offers a potential cure for pancreatic cancer if detected early , over 90% of subjects have advanced disease at presentation due to the absence of sensitive imaging tests and biomarkers . In the present study, we demonstrate the detection of early precancerous lesions (PanINs) in genetically engineered mice. These lesions are not seen by routine radiographic imaging such as MRI, PET, or CT scans. Notably, no control PanIN imaging agent is available at this time against which to compare our technology, which emphasizes the concept’s novelty and utility. In the current investigation, we utilized fluorophores for imaging in a murine model as proof of principle. However, more highly sensitive compounds have been used to enhance imaging in humans, such as Fluorine-18 , a radiopharmaceutical tracer used for PET (positron emission tomography) imaging, or Technetium-99m (99mTc), used for CT-SPECT imaging . Employing the CCK-BR as a specific target for early detection of precancerous lesions in the pancreas, combined with such imaging compounds, may increase the number of subjects identified with surgically resectable lesions. Furthermore, if the CCK-BR-targeted NP can also deliver gastrin siRNA or other payloads to the PanIN lesions, such as siRNA for mutant KRAS, these NPs would have the potential not only to identify but also to treat these PanIN lesions, decreasing proliferation and halting progression to cancer.
Georgetown University and the NIH hold intellectual property for this work.
A dermatology E-learning programme is perceived as a valuable learning tool in postgraduate general practice training | e8e6fb5e-1c8e-40ac-af47-1b7a1e03137d | 8994645 | Family Medicine[mh] | In medical schools and residency training programs, dermatology training is limited, leading to both knowledge gaps in dermatological pathology as well as low confidence in the performance of skin examinations and management of dermatological conditions. Previous studies have shown that the dermatological diagnostic ability of General Practitioners (GPs) is suboptimal. - Cutaneous disorders form a significant percentage of the GPs workload (15% of the GP consultations a day). , Consequently, adequate diagnosis and treatment of dermatological conditions by GPs is essential to optimize patient referrals to dermatologists, prevent misdiagnoses and their impact on patient health, as well as to increase trust and satisfaction among patients in the competency of their GPs. , Therefore, it is important to improve the dermatological background and experiences of future GPs by providing appropriate dermatological training during their residency. , However, resident shifts and work-hour restrictions typically interfere with daily teaching or lecturing. Also, the ongoing changing context of medical education demands a more active, self-steering attitude from students over time. Thus, other formats of teaching, like E-learning programmes, should be explored to establish effective learning. E-learning or online learning is defined as 'any educational intervention mediated electronically via the Internet'. A growing body of literature recognizes the importance of E-learning in medical education. - In comparison to traditional teaching methods (lectures, teacher-led discussions, and group work assignments), E-learning methods use a format that is available and comparable for all users. , Recent studies that have evaluated the effect of E-learning formats in dermatology have shown that students valued its visual and interactive aspects. , , - Thereby, an E-learning programme in combination with traditional teaching methods resulted in improved retention of knowledge regarding dermatological topics. Moreover, Fransen and colleagues reported a positive effect of E-learning programmes on acquiring dermatology knowledge of undergraduate medical students. Students appreciated the visual images, multiple-choice questions and feedback on the answers, which facilitated the recognition of dermatological conditions. Nonetheless, there are limited insights into the effect of E-learning in workplace-based postgraduate education. As such, less is known about the determinants and frequency of E-learning utilization in postgraduate medical education. The aforementioned lack of dermatological knowledge, the variable working shifts, the different learning context (students versus residents), and fewer insights on residents' learning effect indicates a need to better understand the GP residents' perceptions of a dermatology E-learning and how it affects their learning processes. Furthermore, there is little known about clinical teachers' perceptions on embedding E-learning programmes in the educational programme. - Students or postgraduates and clinical teachers are educational partners, and their relationship determines the better understanding of contents, opportunities to learn with peers and the interaction within the group. 
Therefore, a better understanding is needed of how teachers respond to E-learning programmes and of their acceptance. The present study aims to determine GP residents' perceptions of the learning effect of a dermatology E-learning programme. Furthermore, we aim to determine clinical teachers' perceptions on embedding and using E-learning programmes alongside the traditional teaching methods for GP residents. The following research questions were studied: (1) what is the effect of a dermatology E-learning programme on the acquisition of GP residents' dermatological knowledge? (2) what are GP residents' perceptions of the learning effect of a dermatology E-learning programme? and (3) what are clinical teachers' perceptions on embedding and using a dermatology E-learning programme in the traditional teaching methods for GP residents?
Design, setting and participants

The study took place in the period from May 2019 to August 2019 and used a mixed-method design ( ) with convergent parallel collection of data to create a synergistic understanding, combining qualitative data (individual semi-structured interviews) and quantitative data (results of pre- and post-intervention knowledge tests). , Participants were first-year GP residents and clinical teachers at the GP Specialty Training programme of Maastricht University, the Netherlands. The residency programme consists of three years, in which residents participate in weekly education days organized by the GP Specialty Training Programme. The content of these days includes lectures, case-based lectures and group work about different fields of medicine. GP residents (n=21) from the spring 2019 cohort were asked to participate in the study in the first educational meeting. After consent, residents (n=21) were randomized into an intervention group and a control group: (1) GP residents who were not participating in the traditional teaching methods but did have access to and were participating in the E-learning programme (n=12), and (2) GP residents who were participating in the traditional teaching methods but did not have access to and were not participating in the E-learning programme (n=9). For the interviews, eleven GP residents gave consent: six residents of the E-learning programme group and five residents of the traditional teaching group. The traditional teaching methods consisted of two scheduled education sessions (180 minutes) addressing dermatological topics, provided by clinical teachers from the GP Specialty Training. The online dermatology E-learning programme, Education in Dermatology (ED), was developed by dermatologists and is easily accessible from any desktop computer, laptop, or smartphone with an internet connection. The programme consisted of 31 clinical cases about cutaneous problems. The cases contained images and multiple-choice questions regarding descriptions, diagnosis and management of cutaneous problems. Answers and feedback were provided with examples of important visual features necessary to evaluate skin disorders. In addition, web-based links to learning materials were provided within the E-learning programme. Clinical teachers (n=5) spending more than 6 hours per week teaching were approached via e-mail or in person. Four teachers with access to the E-learning programme and one teacher without access participated in the interviews. The Ethical Review Board (ERB) of the Netherlands Association for Medical Education (NVMO) approved the procedures of this study.

[Figure: Study design and flowchart of study participants. The figure provides information on the study design and study participants (GP residents). Twenty-one first-year GP residents were divided into two groups (control group and intervention group). After two knowledge tests, semi-structured interviews were conducted with GP residents and clinical teachers to explore perceptions of the E-learning programme.]

In its decision-making procedures, the ERB applies guidelines based on ethical principles from existing frameworks and codes of conduct (e.g., the Declaration of Helsinki, last revised in 2013). Participating trainees and clinical teachers gave written informed consent. All data were anonymized with codes.
Data collection

Quantitative data

To identify the effect of the E-learning programme on knowledge acquisition, the residents completed a pre- and post-knowledge test, i.e., before and after participating in the traditional teaching method or the E-learning programme. Dermatologists of Maastricht University Medical Centre+ (MUMC+) developed the pre- and post-knowledge tests. Each test contained 45 multiple-choice questions regarding diagnosis, management and treatment of common dermatological conditions. The tests focused on different levels of learning: knowledge, application and thinking/problem-solving ability. The questions are part of an existing validated question bank used for summative assessment during clerkships at MUMC+. To ensure validity and reliability, all questions were critically reviewed by a dermatologist and a course instructor (HM, SM) before being used in the pre- and post-knowledge tests. Moreover, internal consistency was investigated by calculating Cronbach's alpha.

Qualitative data

Semi-structured individual interviews of approximately 60 minutes with GP residents and clinical teachers were conducted by the first researcher (MV) after the post-knowledge test took place. The interview guides (Appendix A and Appendix B) contained open-ended questions probing for expectations, perceptions, personal experiences and learning activities related to the E-learning programme or traditional teaching method. The interviews were audio-recorded, transcribed verbatim and analyzed using template analysis. The audio recordings were deleted after the transcription process. The results are presented through summaries and quotes.

Data analysis

Quantitative data

All data are expressed as means with corresponding standard deviation (SD) unless indicated otherwise. The pre- and post-intervention knowledge tests of the intervention group were compared using paired-samples t-tests. Post-knowledge test scores of the intervention and the control group were compared using independent t-tests. Statistical significance was set at p<0.05. Effect sizes (Cohen's d) with corresponding 95% confidence intervals were calculated for the quantitative comparison between the two groups. Cronbach's coefficient α was used to calculate the internal consistency of the questions used in the knowledge tests. A Cronbach's alpha between ≥0.70 and ≤0.95 was classified as good. All analyses were performed using the Statistical Package for the Social Sciences (SPSS version 24).

Qualitative data

The analysis of the transcripts was done independently by MV and a second researcher (SH) using template analysis, performed with Atlas.ti software (version 8.0). The interviews continued until thematic saturation was reached. Thematic saturation was determined by the research team according to the following criteria: (1) new data could be fitted into categories that were already devised, (2) no new insights, themes, issues or counter-examples/cases arose, and (3) consensus was reached within the research team about the notion of saturation with the collected and analyzed data. Analysis of interviews 1-5 with the GP residents of the intervention group was labelled and coded by MV, and crosschecked by SH. The outcomes were compared, and differences were discussed until consensus was reached, which resulted in an initial template used in interviews 6-9 (four residents of the E-learning programme group and one resident of the control group).
As coding proceeded, constant comparison defined the characteristics of each category and resulted in an adapted initial template, which was used for the interviews with the clinical teachers. Finally, by examining and re-examining the data from the intervention group, the control group, and the clinical teachers' group, the relationships among the major categories were explored, and no new insights were obtained. At this point, thematic saturation was reached.
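For readers who want to reproduce the internal-consistency and effect-size calculations described above outside SPSS, here is a minimal Python sketch; the score matrices are hypothetical, and the paired-design definition of Cohen's d used here (mean difference over the SD of the differences) is one common convention, not necessarily the one applied in the original analysis.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

def cohens_d_paired(pre: np.ndarray, post: np.ndarray) -> float:
    """Paired-design Cohen's d: mean difference over SD of the differences."""
    diff = post - pre
    return diff.mean() / diff.std(ddof=1)

# Hypothetical 0/1 item responses for 5 examinees on 4 items.
items = np.array([[1, 1, 0, 1],
                  [0, 1, 0, 0],
                  [1, 1, 1, 1],
                  [0, 0, 0, 1],
                  [1, 1, 1, 0]], dtype=float)
print(round(cronbach_alpha(items), 2))   # ~0.52 for this toy matrix

# Hypothetical paired pre/post percentage scores for four residents.
pre = np.array([55.0, 60.0, 58.0, 62.0])
post = np.array([58.0, 66.0, 60.0, 70.0])
print(round(cohens_d_paired(pre, post), 2))
```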
The study took place in the period from May 2019-August 2019 and used a mixed-method design ( ) with a convergent parallel collection of data in order to create a synergistic understanding, including qualitative data (individual semi-structured interviews) and quantitative data (results of pre-and post-intervention knowledge tests.) , Participants were first-year GP residents and clinical teachers at the GP Specialty Training programme of Maastricht University, the Netherlands. The residency programme consists of three years, in which residents participate in weekly education days organized by the GP Specialty Training Programme. The content of these days includes lectures, case-based lectures and group work about different fields of medicine. GP residents (n=21) from the spring 2019 cohort were asked to participate in the study in the first educational meeting. After consent, residents (n=21) were randomized into an intervention group and a control group: (1) GP residents who were not participating in the traditional teaching methods but did have access to and were participating in the E-learning programme (n=12) and (2) GP residents who were participating in the traditional teaching methods but did not have access to and were not participating in the E-learning programme (n=9). For the interviews, eleven GP residents gave consent, six residents of the E-learning programme group and five residents of the traditional teaching group. The traditional teaching methods consisted of two scheduled educations sessions (180 minutes) addressing dermatological topics provided by clinical teachers from the GP Specialty Training. The online dermatology E-learning programme, Education in Dermatology (ED), is developed by dermatologists and is easily accessible from any desktop computer, laptop, and smartphone with an internet connection. The programme consisted of 31 clinical cases about cutaneous problems. The cases contained images and multiple-choice questions regarding descriptions, diagnosis and management of cutaneous problems. Answers and feedback were provided with examples of important visual features necessary to evaluate skin disorders. In addition, web-based links to learning materials were provided within the E-learning programme. Clinical teachers (n=5) spending more than 6 hours per week teaching were approached via e-mail or in person. Four teachers with access to the E-learning programme and one teacher with no access to the E-learning programme participated in the interviews. The Ethical Review Board (ERB) of the Netherlands Associations for Medical Education (NVMO) approved the procedures of this study. . Study design and flowchart of study participants The figure provides information on the study design and study participants (GP residents). Twenty-one first year GP residents were divided into two groups (control group and intervention group). After two knowledge tests, semi-structured interviews were conducted with GP residents' and clinical teachers' to explore perception about the E-learning programme. In the decision-making procedures, the ERB applies guidelines based on ethical principles from existing frameworks and codes of conduct (e.g., the Declaration of Helsinki, last revised in 2013). Participating trainees and clinical teachers gave written informed consent. All data were anonymized with codes.
Quantitative data In order to identify the effect of the E-learning programme on knowledge acquisition, the residents completed a pre-and post-knowledge test, i.e., before and after participating in the traditional teaching method or the E-learning programme. Dermatologists of Maastricht University Medical Centre+ (MUMC+) developed the pre-and post-knowledge tests. Each test contained 45 multiple-choice questions regarding diagnosis, management and treatment of common dermatological conditions. The tests mainly focused on different levels of learning: knowledge, application and thinking/problem-solving ability. The questions are part of an existing validated question bank used for summative assessment during clerkships at MUMC+. To ensure validity and reliability, all questions were critically reviewed by a dermatologist and a course instructor (HM, SM) before using these in the pre-and post- knowledge tests. Moreover, internal consistency was investigated by calculating Cronbach's alpha. Qualitative data Semi-structured individual interviews of approximately 60 minutes with GP residents and clinical teachers were conducted by the first researcher (MV) after the post-knowledge test took place. The interview guides (Appendix A and Appendix B) contained open-ended questions probing for expectations, perceptions, personal experiences and learning activities by using the E-learning programme or traditional teaching method. The interviews were audio-recorded, transcribed verbatim and analyzed using template analysis. The audio recordings were deleted after the transcription process. The results will be presented through summaries and quotes.
In order to identify the effect of the E-learning programme on knowledge acquisition, the residents completed a pre-and post-knowledge test, i.e., before and after participating in the traditional teaching method or the E-learning programme. Dermatologists of Maastricht University Medical Centre+ (MUMC+) developed the pre-and post-knowledge tests. Each test contained 45 multiple-choice questions regarding diagnosis, management and treatment of common dermatological conditions. The tests mainly focused on different levels of learning: knowledge, application and thinking/problem-solving ability. The questions are part of an existing validated question bank used for summative assessment during clerkships at MUMC+. To ensure validity and reliability, all questions were critically reviewed by a dermatologist and a course instructor (HM, SM) before using these in the pre-and post- knowledge tests. Moreover, internal consistency was investigated by calculating Cronbach's alpha.
Semi-structured individual interviews of approximately 60 minutes with GP residents and clinical teachers were conducted by the first researcher (MV) after the post-knowledge test took place. The interview guides (Appendix A and Appendix B) contained open-ended questions probing for expectations, perceptions, personal experiences and learning activities by using the E-learning programme or traditional teaching method. The interviews were audio-recorded, transcribed verbatim and analyzed using template analysis. The audio recordings were deleted after the transcription process. The results will be presented through summaries and quotes.
Quantitative data All data are expressed as means with corresponding standard deviation (SD) unless indicated otherwise. The pre-and post-intervention knowledge tests of the intervention group were compared using paired-samples t-tests. Post-knowledge tests scores of the intervention and the control group were compared using independent t-tests. Statistical significance was set at p<0.05. Effect sizes (Cohen's d) with corresponding 95% confidence intervals were calculated for the quantitative comparison between the two groups. Cronbach's coefficient α was used to calculate the internal consistency of the questions used in the knowledge tests. A Cronbach's alpha between ≥0.70 and ≤0.95 was classified as good. All analyses were performed using the Statistical Package for Social Sciences (SPSS version 24). Qualitative data The analysis of the transcripts was independently done by MV and a second researcher (SH) using template analysis. Template analysis were performed using Atlas.ti software (version 8.0). The interviews continued until thematic saturation was reached. The thematic saturation was determined by the research team following these criteria: (1) if new data could be fitted in categories that were already devised, (2) if no new insights, themes, issues or counter-example/cases arose, and (3) consensus within the research team was reached about the notion of saturation with the collected and analyzed data. Analysis of interviews 1-5 with the GP residents of the intervention group was labelled, coded by MV, and crosschecked by SH. The outcomes were compared, and differences were discussed until consensus was reached, which resulted in an initial template used in interviews 6-9 (four residents of the E-learning programme group and one resident of the control group). As coding proceeded, constant comparison defined the characteristics of each category and resulted in an adapted initial template, which was used for the interviews with the clinical teachers. Finally, by examining and re-examining the data from the intervention, the control group, as well as the clinical teachers' group, the relationships among the major categories were explored, and no new insights were obtained. At this point, thematic saturation was reached.
All data are expressed as means with corresponding standard deviation (SD) unless indicated otherwise. The pre-and post-intervention knowledge tests of the intervention group were compared using paired-samples t-tests. Post-knowledge tests scores of the intervention and the control group were compared using independent t-tests. Statistical significance was set at p<0.05. Effect sizes (Cohen's d) with corresponding 95% confidence intervals were calculated for the quantitative comparison between the two groups. Cronbach's coefficient α was used to calculate the internal consistency of the questions used in the knowledge tests. A Cronbach's alpha between ≥0.70 and ≤0.95 was classified as good. All analyses were performed using the Statistical Package for Social Sciences (SPSS version 24).
Results
Quantitative data
In total, 21 GP residents were included, and all subjects (9 residents in the control group and 12 in the intervention group) completed the six-week E-learning programme or the two education sessions that were part of the traditional teaching methods. No drop-out was seen. For the pre-knowledge test, no statistical analysis could be performed because of missing data in the control group due to technical problems with the E-learning programme (data were not saved). The pre-knowledge test consisted of 46 items (α = 0.78), and the post-knowledge test consisted of 45 items (α = 0.90). The intervention group showed a significant increase in knowledge test scores from the pre-knowledge test (M = 58.92%, SD = 9.55%) to the post-knowledge test (M = 64.92%, SD = 13.65%; t(11) = 2.258, p = 0.045, Cohen's d = 0.51), suggesting that the E-learning intervention moderately benefitted the knowledge acquisition of GP residents. There was no significant difference in post-knowledge test scores between the control group (M = 66.38%, SD = 15.78%) and the intervention group (M = 64.92%, SD = 13.65%; t(18) = 0.351, p = 0.730, Cohen's d = 0.10).
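For reference, the effect sizes reported above correspond to the standard Cohen's d definitions; the source does not specify which variant was used for the paired comparison, so both conventional forms are restated here:

```latex
% Paired (pre/post) comparison: mean difference over the SD of the differences.
d_{\mathrm{paired}} = \frac{\bar{X}_{\mathrm{post}} - \bar{X}_{\mathrm{pre}}}{SD_{\mathrm{diff}}}
% Independent groups: mean difference over the pooled SD.
d = \frac{\bar{X}_1 - \bar{X}_2}{SD_{\mathrm{pooled}}}, \qquad
SD_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)SD_1^2 + (n_2 - 1)SD_2^2}{n_1 + n_2 - 2}}
```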
Qualitative data
In the following paragraphs, we explore the primary themes and provide clarifying quotes.
Format
The content provided by the E-learning programme was considered uniform and set a basic level for everyone. GP residents indicated that they were easily overwhelmed by the many textbooks available for traditional education and often did not know where to start. The E-learning programme provided a starting point for their learning. “It [E-learning programme] is a good way to acquire knowledge. I find it less trouble than random opening a book or an NHG-standard [Dutch protocols designed especially for GPs], and not knowing where to start. The knowledge you acquire from books or NHG-standards does not lasts and at a certain moment, you have read it all.” - GP resident (intervention group - interview 6) Residents participating in the control group indicated a lack of uniformity in the selected clinical cases and perceived that the learning effect of the weekly education sessions was mainly determined by the quality of the teacher, the quality of the group, and/or the quality of the selected cases. However, the collaborative approach, mainly the dialogue and discussion regarding clinical cases, was positively perceived and led to retention of the main messages. Moreover, involving GP residents in real-world clinical cases and linking new information to prior knowledge required effective communication and collaboration among clinical teachers, GP residents, and others. GP residents from the intervention group did not miss this collaborative approach. Nonetheless, it was stated that the dialogue and discussion within the traditional teaching method (in other education sessions they attended) were appreciated. However, it was also noted that the interactivity of these education sessions strongly depended on the skills of the clinical teacher. “The group discussion stimulates you actively to work and think about the problem. Not only listening, also actively taking part in the discussion, instead of passive listening, for me, that is the same as passively reading a book. The important thing is that you become activated. Thereby, you are being forced to think for yourself and be able to explain your thoughts to the group. Eventually, you can easier understand why you are giving that specific answer to the group.” - GP resident (control group - interview 5) Clinical teachers suggested establishing a clearer structure/framework for the E-learning programme to allow a better understanding of the different dermatological conditions (e.g., by classifying groups of dermatological conditions), instead of the more fragmented dermatological cases offered. Without such a framework, GP residents could miss links between theory (learning about dermatological conditions) and practice (recognizing and treating a variety of dermatological conditions in clinical practice). Clinical teachers described that, during the education sessions, these links between theory and practice were made more easily via dialogue and discussion, thereby helping GP residents embed dermatological knowledge. A few barriers of the E-learning programme were related to technical issues, e.g., the slowness of the programme.
Agency
GP residents used the E-learning programme autonomously and at their own pace and in their own time (e.g., during clinic hours when a patient dropped out, during a free afternoon, or at home). They indicated that autonomous learning with the E-learning programme enabled them to find additional information about the cases through the links and references offered by the programme. The references and links helped the GP residents to review wrong answers. Moreover, they could freely choose to use the links for additional information, and they became acquainted with other materials. They therefore not only studied and learned independently but also became more self-managing, readily using the links to the websites to find out more and study the mechanisms of disease. “The fact that you can use it [E-learning programme] in between two patients; if a patient drops out and you have more time left. Furthermore, I can just open and use the E-learning programme for a couple of minutes in between work.” – GP resident (intervention group - interview 1) The push notifications of the E-learning programme maintained its regular use and offered instant stimulation for the GP residents to learn. They appreciated this stimulation, as the amount of time they must spend at the clinical workplace is substantial, leaving little time for actual study, so scheduling time for study was easily forgotten. GP residents in the control group indicated that recall of the acquired dermatological knowledge was not easy given the limited teaching hours and the lack of stimulation to revisit teaching material. In addition, little time was left during the traditional teaching methods for GP residents to individually acquire more information about specific dermatological conditions that were not discussed during the education sessions. Furthermore, no direct follow-up was possible, and there was no time for GP residents to study and learn at their own pace and in their own time. “In a group [traditional teaching methods], you cannot find more information about a topic you forgot about, for example: Which cream? Which steroid class? That is not something you are going to look up for yourselves in an education session. However, if you work on an E-learning programme at home, you are able to immediately choose yourselves to find more information about it. Thereby, you can determine for yourselves on what topic you have to find more information, and thereby you can easier adjust it to yourselves and to your knowledge.” – GP resident (control group - interview 5)
Exposure to cases
The high-resolution images in the E-learning programme allowed the GP residents to gain a deeper understanding of the range of clinical presentations and provided more exposure to dermatological conditions. In addition, GP residents were able to identify their own knowledge gaps (e.g., different kinds of therapies for dermatological conditions). GP residents valued learning through clinical cases, which they also recognized in clinical practice. In addition, they appreciated focusing on a selection of dermatological topics instead of being overwhelmed by many comprehensive textbooks (related to the theme 'Format'). GP residents from both the intervention group and the control group experienced a lack of basic dermatological knowledge and preferred more exposure to dermatological education. “I am lacking a bit of knowledge, knowledge concerning dermatological conditions that are often seen in general practice.” – GP resident (control group - interview 5) The cases selected for the E-learning programme contained common, rare, and life-threatening dermatological conditions. Clinical teachers indicated that the selected cases and the associated multiple-choice questions from the E-learning programme were encountered in daily clinical practice and therefore enabled the GP residents to acquire a good balance of dermatological knowledge across various types of conditions. “My opinion about the content [E-learning programme] was that it did not consist of any rare conditions. The cases [dermatological conditions] are commonly encountered in the general practice. Those are relevant cases that you will actually see in general practice.” – Clinical teacher (access E-learning programme - interview 4) In contrast to the cases selected for the E-learning programme, in the education sessions the GP residents themselves determined the input of the cases. Therefore, it is possible that a rare, a life-threatening, or even a common dermatological condition could be missed.
Link with practice
GP residents indicated that exposure to dermatological cases in practice was a valuable learning experience. Recognizing clinical cases from the E-learning programme in practice was perceived as helpful, offered repetition, and confirmed their dermatological knowledge. It also enabled them to use that specific case to optimize their consultation and to consolidate their knowledge. “I am an active student, so I have to see something, I have to do something, and from that experience, I learn something, thus, this [E-learning programme] offers me a perfect solution. I would rather see it than that I have to take a book and read it. Thus, I prefer the situation [E-learning programme]; seeing things, checking, getting feedback, and more, practicing and recognizing.” – GP resident (intervention group - interview 2) GP residents in the control group elaborated on prior experiences with other E-learning programmes and preferred that E-learning programmes provide authentic clinical cases related to daily clinical practice. This link between theory and practice was present and valued during the GP residents' education sessions. In these education sessions, GP residents met in a group and worked on several cases of patients with a dermatological condition.
The GP residents themselves had chosen these clinical cases from their own practice. However, in a number of the selected cases, the diagnosis was not certain. GP residents felt uncertain and insecure about the possibility of missing diagnoses. Clinical teachers noted that especially first-year GP residents were looking for certainty, and that missing information in the education session or in practice could lead to insecurity about their clinical eye. “It is common in the group first year GP residents are very eager not to miss any lessons, not all of them, but it is a repeating theme that plays a role by all of them, by some more than the others, but it still remains a repeating theme that comes back in the first year and which is often mentioned.” – Clinical teacher (access E-learning programme - interview 5) Within the E-learning programme, this insecurity was not present, as feedback was provided to the GP residents via the programme. GP residents noted that receiving feedback from the E-learning programme led to a deeper understanding of the different dermatological conditions. Clinical teachers stated that instantly triggering GP residents with questions and clinical cases, combined with a self-chosen time and medium, fits the GP resident of today perfectly, thereby linking the digital learning environment of the E-learning programme to the traditional teaching methods. “I think, to my opinion, that it [E-learning programme] fits the current GP resident perfectly. From my own experience, I see more often GP residents, who appreciate it when learning is interactive, in the format of a quiz, something they can actively participate to, as long as they are getting entertained. I think they value that the most, and to my opinion, I got the idea, that, the more serious learning like spending hours learning from a book, is, how do I have to put it, is something, that through the years has become less sexy.” – Clinical teacher (access E-learning programme - interview 4)
Discussion
The aim of this study was to determine first-year GP residents' and clinical teachers' perceptions, and the learning effect in GP residents, of a dermatology E-learning programme versus traditional teaching methods. Therefore, we conducted a study that combined a quantitative and a qualitative design. The quantitative data showed a significant learning effect of the E-learning programme in the intervention group. Due to the missing pre-knowledge test data in the control group, it was not possible to determine whether the learning effect of the E-learning programme differed from that of the traditional teaching methods. The post-knowledge test scores showed little difference between the intervention and control groups. Fransen and colleagues used the same validated question bank for the knowledge tests as this study, and it is therefore possible that the knowledge tests did not connect well with the prior knowledge of the GP residents. Postgraduates such as GP residents have more experience in clinical practice than undergraduate medical students and therefore more existing (basic) dermatological knowledge. A number of participants in the interviews also mentioned this lack of alignment of the tests. The qualitative data explored the learning mechanisms of GP residents. Four primary themes were identified via template analysis: format, agency, exposure to cases, and link with practice. Overall, GP residents valued learning through authentic clinical cases, which allowed them to link theory to practice. GP residents indicated that the E-learning programme had a number of advantages, such as its uniform format, its accessibility, and the incentive for regular use. On the other hand, GP residents receiving traditional teaching methods appreciated the dialogue and group discussion that enabled interaction and linked theory to practice. However, GP residents following traditional teaching methods stated that they could not acquire dermatological knowledge at their own pace and in their own time, were not able to recall certain clinical cases, and wished for more exposure to dermatological conditions. Clinical teachers stated that the links between theory and practice would be easier to achieve through dialogue and discussion. Moreover, they indicated that the E-learning programme fits current GP residents well because it enables linking the digital learning environment of the E-learning programme to the traditional teaching methods. Our results corroborate the ideas and findings in the literature. Silva and colleagues analyzed and evaluated the impact of a dermatology E-learning programme on students' learning. The E-learning programme combined with the traditional course (blended learning) significantly increased students' knowledge about dermatology, compared to students who solely received traditional teaching methods. Therefore, Silva and colleagues concluded that the use of an E-learning programme in combination with traditional teaching methods improved retention of dermatological knowledge. The qualitative data of this study also explored GP residents' learning mechanisms, and we found that all GP residents valued learning through authentic clinical cases that link theory to practice: the E-learning programme by providing a wide selection of clinical cases followed by links to websites, and the traditional teaching methods by providing clinical cases that were selected by the GP residents themselves from their own clinical experience.
By incorporating E-learning programmes into the residency training programme, GP residents benefit from the advantages of both methods. In accordance with this, Campbell and colleagues demonstrated that the use of virtual learning environments was associated with higher assignment marks than participation in face-to-face discussions. In the current study, the test scores of the intervention group improved significantly. Although the difference in post-knowledge test scores between the control and intervention groups was relatively small and non-significant, the qualitative data analysis suggested that the E-learning programme can be used as a meaningful learning activity in addition to any teaching method. In this way, the methods can benefit from each other: the E-learning programme does not repeat the subjects of the traditional teaching methods used in the setting of this study but provides a deeper understanding of the acquired dermatological knowledge. Some studies have failed to show a difference in learning effects between E-learning programmes and traditional teaching methods. However, despite the lack of a significant difference in test results, students preferred the online learning module format to the traditional teaching method format: the online learning module took less time, and a clearer structure was provided. The interview data in this study also pointed out that GP residents appreciated the more uniform format, the constant availability of the teaching material, and the equal content for everyone.
Limitations
The findings of this study must be seen in light of some limitations. Firstly, the missing pre-knowledge test data of the control group made it impossible to determine whether the learning effect of the E-learning programme differed from that of the traditional teaching methods. Secondly, the E-learning programme was only evaluated in one context and setting (Maastricht University). Thirdly, participation was voluntary, so it is possible that mainly motivated GP residents and clinical teachers participated in this study; however, all residents of the spring 2019 cohort participated. Fourthly, the sample sizes of the control and intervention groups during quantitative data collection were relatively small. Given the average effect of e-learning interventions on knowledge acquisition in undergraduate medical education, the power calculation suggested a sample size of 11 GP residents per group (22 GP residents in total). In the time allowed, we could only recruit 21 GP residents in total, so we are aware that our study is underpowered. Therefore, our results may not be generalizable to other areas and medical curricula. Despite the sample size, clear themes that are consistent with prior literature emerged from the qualitative data collection. This suggests that the findings are of value for medical educators.
Implications for research and/or practice
GP residency programmes could benefit from integrating E-learning technologies into their traditional teaching methods. This would enable a link between theory and practice and could eventually lead to a higher level of dermatological knowledge and an improved dermatological diagnostic ability of GPs. The acquired insights could help to design effective E-learning programmes in which (digital) learning is supported for students as well as clinical teachers.
For example, E-learning programmes could be tailored to traditional teaching methods in which clinical teachers give GP residents guidance and structure by systematically describing dermatological conditions, while the E-learning programmes provide instant stimulation via authentic clinical cases from practice. This would give GP residents structure in their clinical practice and eventually help them to solve and understand dermatological conditions.
Conclusion
The aim of the present study was to explore GP residents' knowledge retention and residents' and clinical teachers' perceptions of the learning value of a dermatology E-learning programme. This study showed that the use of an E-learning programme in dermatology for GP residents was perceived as a valuable learning tool. The E-learning programme resulted in an improvement in GP residents' dermatology knowledge. In addition, GP residents and clinical teachers perceived that the E-learning programme enabled GP residents to acquire dermatological knowledge at their own pace and in their own time, to link theory to practice, and to recall clinical cases. Given the advantages of both teaching methods, E-learning programmes and traditional teaching methods should be combined so that they can benefit from each other. Future studies should evaluate and focus on the perceptions of learners and teachers to enable a fit-for-purpose implementation of E-learning programmes in traditional teaching methods.
Acknowledgements
The authors wish to thank all GP residents and clinical teachers who participated in this study.
Conflicts of Interest
The authors declare that they have no conflict of interest.
Effect of social media-based education on self-care status, health literacy, and glycated hemoglobin in patients with type 2 diabetes
Introduction
Diabetes is recognized as one of the most common chronic diseases worldwide. The World Health Organization (WHO) even refers to diabetes as a silent epidemic. It is predicted that the number of individuals affected by this disease will exceed 592 million by 2035. The prevalence of diabetes is steadily increasing in various countries worldwide. Statistics in Iran also show that diabetes affects 7% of the country's population, and with the current trend, experts estimate that this number will reach 6 million by 2030. Type 2 diabetes (T2D) is the most prevalent form of diabetes globally, accounting for approximately 90% of diabetes cases. Uncontrolled diabetes is associated with the onset of disabilities, cardiovascular diseases, nephropathy, neuropathy, retinopathy, and patient mortality. Nevertheless, diligent monitoring and control of the disease can significantly delay or even prevent the onset of diabetes-related complications. Any care activity requires patient involvement, and self-care is considered a key component of diabetes management. The collection of behaviors adopted by individuals with diabetes, or at risk of its complications, that enables them to manage the disease independently is referred to as self-care activities. Proper adherence to self-care activities leads to benefits such as adequate blood sugar control, reduced complications, decreased hospitalizations, and improved quality of life. Previous studies have shown that most individuals with diabetes have limited self-care capabilities. In recent years, patients have become significantly more interested in taking responsibility for their disease management, but they need education. Education therefore plays a crucial role in enhancing the self-care abilities of individuals with diabetes. Given the chronic and lifelong nature of the disease, self-care for these patients is a lengthy, complex, and challenging process that requires significant lifestyle changes. This underscores the growing importance of self-care education. Despite their inherent potential, in-person educational sessions have limitations, including impractical transportation for low-income groups, time and location constraints, and a lack of human and financial resources. Recent advancements in information and communication technologies have created opportunities to facilitate learning through multimedia virtual education. Social media platforms, in particular, play a significant role in the prevention, care, and management of chronic diseases, and their effectiveness in managing various chronic diseases has been well documented. Despite the numerous benefits offered by various social messaging platforms, limited studies have explored their impact on managing type 2 diabetes in developing countries, revealing a gap between the use of these platforms and the understanding of the disease's management. More than three-quarters of individuals with diabetes in low- to middle-income countries face significant challenges in accessing adequate healthcare and treatment. According to the World Health Organization, fewer than 50% of patients in these countries benefit from self-care activities.
A systematic review and meta-analysis conducted in 2021 also indicated that the self-care score of Iranian diabetic patients is approximately 48.86%, which falls short of the desired level. Based on this finding, the researchers strongly recommend developing interventions aimed at improving self-care among individuals with diabetes. The findings of another study, in Pakistan, demonstrate gaps in the knowledge of individuals with diabetes and highlight the importance of self-care education. That study also identifies obstacles, such as financial constraints and excessive work commitments, that hinder patients' ability to engage in self-care. Obstacles such as high costs, insufficient access to medication, and unequal distribution of healthcare services across different regions are among the challenges faced by developing countries. Among the factors affecting the prevention and control of diabetes is sufficient awareness of the disease, the factors contributing to its occurrence, and how to prevent it. One factor that greatly affects this level of awareness, and consequently the effective control and prevention of diabetes, is health literacy: the degree to which individuals have the capacity and ability to acquire, process, and understand the health-related information and services they need to make appropriate decisions about their health. Given the direct correlation between health literacy levels and self-care activities, we designed and implemented a study on self-care education in virtual spaces to investigate its effects on self-care status, health literacy, and HbA1c levels among individuals with T2D in Iran.
Participants
This educational intervention study was conducted on patients attending the diabetes clinic in Arak, Iran, from March 2022 to June 2022. The sampling method used in this study was random sampling. After acquiring the necessary permissions, eligible individuals were selected from among patients attending the clinic. They were introduced to the research objectives and, upon providing informed consent, were enrolled in the study. The inclusion criteria for this study were as follows: a definitive diagnosis of T2D by a specialist endocrinologist, an age range of 30-60 years, a minimum of 6 months of diabetes history, ownership of a smartphone, the ability to use social messaging apps such as Telegram, the absence of psychological disorders, literacy in reading and writing, and Iranian nationality. The exclusion criteria included unwillingness to continue participation, pregnancy, and hospitalization during the study. The required sample size was calculated based on a similar study, with an effect size of 0.7, α = 0.05, and β = 0.1, resulting in a sample size of 34 participants per group. Allowing for an anticipated attrition rate of 10%, 38 individuals were enrolled in each group. The power of the study was calculated to be 80%, ensuring an adequate sample size to detect meaningful differences.
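As an illustration, the sketch below shows how such a sample-size calculation can be reproduced with statsmodels, assuming a two-sided, two-sample t-test with the stated effect size of 0.7, α = 0.05, and the reported 80% power; the exact procedure the authors used is not specified in the source.

```python
# A minimal sketch of the sample-size calculation described above, under the
# stated assumptions; not necessarily the authors' exact procedure.
from math import ceil
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.7,  # Cohen's d taken from the reference study
    alpha=0.05,       # two-sided significance level
    power=0.80,       # reported study power
)
target = ceil(n_per_group)     # ~34 participants per group
enrolled = ceil(target / 0.9)  # ~38 per group with a 10% attrition allowance
print(target, enrolled)
```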
Intervention
In this study, patients were randomly allocated to two groups: the intervention group (n = 38) and the control group (n = 38). After explaining the study and creating a channel on the Telegram messaging platform, the lead author invited all participants to join by sending them an invitation link. Access to this messaging platform was ensured for all participants. Necessary self-care instructions from credible and up-to-date sources were shared daily through the channel. The content, provided in Persian, included topics related to self-care, such as physical activity, blood glucose monitoring, types of diabetes, dietary guidelines, diabetes complications, and foot care. To ensure participant engagement with the study material, the researcher monitored daily attendance in the channel, tracked the number of views for each message, and contacted any participant who had not accessed the channel for more than 48 h to inquire about the reasons. If a participant failed to engage, they were excluded from the study. The instructional program lasted for 4 weeks. These monitoring steps were performed by the researcher. Patients in the control group did not receive any educational materials during the study. To adhere to ethical principles, an educational package was provided to these patients after the completion of the study.
Instruments
The researchers developed the self-care questionnaire used in this study by reviewing relevant literature and studies on diabetes. The items in the self-care section of the questionnaire allowed participants to report their self-care activities and the factors affecting their self-care behaviors over the past 7 days. The questionnaire consisted of two sections. The first section included demographic information (age, gender, marital status, education level, occupation, and social media usage) and clinical conditions (duration of diabetes, type of treatment received, and family history of diabetes). The second section contained 16 questions related to diabetes self-care behaviors: adherence to diet, exercise, monitoring, treatment, and prevention of complications. The face validity of the questionnaire was established by presenting it to 10 faculty members with expertise in questionnaire design at Arak University of Medical Sciences. The Cronbach's alpha coefficient for this questionnaire was 0.91, indicating good internal consistency. Health literacy was assessed using the Health Literacy for Iranian Adults (HELIA) questionnaire, designed by Montazeri et al., which has demonstrated desirable validity and reliability. The Cronbach's alpha values for the dimensions of the HELIA questionnaire ranged from 0.72 to 0.89, confirming the reliability of the various dimensions of the instrument. The questionnaire included 33 items assessing patients' health literacy across five dimensions: access (6 questions), reading (4 questions), understanding (7 questions), evaluation (4 questions), and decision-making and application of health information (12 questions). All items, except those in the reading dimension, were scored on a 5-point Likert scale (Always: 5, Most of the time: 4, Sometimes: 3, Rarely: 2, Not at all: 1). The items in the "reading skills" dimension used a modified Likert scale (Very Easy: 5, Easy: 4, Neither easy nor difficult: 3, Difficult: 2, Very Difficult: 1). The raw score for each participant in each dimension was calculated by summing their responses to the items of that dimension. To determine the total score, the dimension scores were summed and divided by the number of dimensions. Scores ranging from 0 to 50 were considered inadequate, scores from 50.1 to 66 semi-sufficient, scores from 66.1 to 84 sufficient, and scores from 84.1 to 100 excellent. Participants completed the instrument through self-reporting at both the beginning and end of the intervention.
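To make the scoring procedure concrete, the sketch below implements it in Python, assuming the standard HELIA convention of linearly rescaling each dimension's raw score to 0-100 before averaging (an assumption: the cutoffs above imply a 0-100 scale, but the source does not spell out the transformation). The example responses are hypothetical.

```python
# A minimal sketch of HELIA scoring under the assumptions stated above.
from statistics import mean

DIMENSIONS = {  # dimension -> number of items, each scored 1-5
    "access": 6,
    "reading": 4,
    "understanding": 7,
    "evaluation": 4,
    "decision_making_and_use": 12,
}

def dimension_score(responses):
    """Rescale one dimension's raw 1-5 item sum to a 0-100 score."""
    n = len(responses)
    return (sum(responses) - n) / (4 * n) * 100

def total_score(responses_by_dimension):
    """Average the five 0-100 dimension scores into the total HELIA score."""
    return mean(dimension_score(r) for r in responses_by_dimension.values())

def classify(score):
    """Map a 0-100 total score onto the categories used in this study."""
    if score <= 50:
        return "inadequate"
    if score <= 66:
        return "semi-sufficient"
    if score <= 84:
        return "sufficient"
    return "excellent"

# Hypothetical participant answering 3 ("Sometimes") on every item:
example = {dim: [3] * n for dim, n in DIMENSIONS.items()}
print(total_score(example), classify(total_score(example)))  # 50.0 inadequate
```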
Glycosylated hemoglobin (HbA1c)
In this study, HbA1c was assessed through laboratory testing at the beginning and end of the intervention. All tests were conducted in a centralized laboratory for all patients.
Data analysis
Data analysis was performed using descriptive statistics, the Wilcoxon test, Fisher's exact test, analysis of covariance (ANCOVA), and the Mann–Whitney U test to compare differences between groups. Data analysis was conducted using SPSS version 23, with a significance level set at 0.05.
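As an illustration, the sketch below reproduces the main named tests in Python rather than SPSS; the data frame, its column names, and its values are hypothetical placeholders, not the study data.

```python
# A minimal sketch of the analyses named above (Wilcoxon signed-rank,
# Mann-Whitney U, and ANCOVA on post scores adjusting for baseline);
# all data below are hypothetical placeholders.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group": ["intervention"] * 4 + ["control"] * 4,
    "selfcare_pre":  [40, 42, 38, 45, 41, 39, 44, 40],
    "selfcare_post": [52, 55, 49, 58, 42, 40, 46, 41],
    "hba1c_pre":  [8.1, 7.9, 8.4, 8.0, 8.2, 8.0, 7.8, 8.3],
    "hba1c_post": [7.5, 7.4, 7.9, 7.6, 8.1, 8.1, 7.9, 8.2],
})

interv = df[df["group"] == "intervention"]
control = df[df["group"] == "control"]

# Within-group pre/post change in HbA1c: Wilcoxon signed-rank test.
w_stat, w_p = stats.wilcoxon(interv["hba1c_pre"], interv["hba1c_post"])

# Between-group comparison of post-intervention scores: Mann-Whitney U test.
u_stat, u_p = stats.mannwhitneyu(interv["selfcare_post"], control["selfcare_post"])

# ANCOVA: post-intervention self-care by group, adjusting for baseline score.
ancova = smf.ols("selfcare_post ~ C(group) + selfcare_pre", data=df).fit()
print(w_p, u_p)
print(ancova.summary())
```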
Ethical considerations
The design and execution of this research adhered to the Helsinki Declaration. Before beginning the study, all participants were informed about the study's objectives and that they could withdraw from the study at any time without affecting their relationship with medical professionals or caregivers. The present study has been registered with the Ethics Committee in Research at Arak University of Medical Sciences under the reference number IR.ARAKMU.REC.1398.042.
Results
Two patients withdrew from the study due to a lack of interest, and one patient did not complete the questionnaire. Additionally, three participants were excluded due to inadequate responses on the Telegram messaging platform. Consequently, data analysis was performed with 70 participants. Of these participants, 38.6% were male, and 15.7% resided in rural areas. The demographic characteristics of the study participants are presented in the corresponding table. The results indicated no statistically significant difference between the mean scores for reading, access, understanding, evaluation, decision-making, and overall health literacy in the two groups before the intervention (p > 0.05). However, after the intervention, the mean scores for reading, understanding, evaluation, decision-making, and total health literacy in the intervention group were significantly higher than in the control group (p < 0.05). The post-intervention access score did not show a statistically significant difference between the two groups (p > 0.05). The changes in the mean scores for reading, access, understanding, evaluation, decision-making, and total health literacy in the intervention group were statistically significant and reflected an increase (p < 0.05). There was no statistically significant difference in the mean scores of the self-care questionnaire (adherence to diet, physical activity, treatment, control, and prevention of complications) between the control and intervention groups before the intervention (p > 0.05). However, after the intervention, the scores for control and prevention of complications differed significantly between the groups, with the intervention group scoring higher than the control group (p < 0.05). The mean scores for the other dimensions did not differ significantly after the intervention (p > 0.05). The changes in the mean scores for control and prevention of complications in the intervention group were statistically significant and reflected an increase (p < 0.05), whereas in the control group the changes in the self-care score dimensions were not significant (p > 0.05). There was a significant difference in the mean total score of the self-care questionnaire between the two groups (p < 0.05). To control for confounding variables, an analysis of covariance (ANCOVA) adjusting for the baseline self-care total score was conducted, revealing a significant difference in the mean post-intervention self-care total score between the two groups, with higher scores in the intervention group (p < 0.05). Additionally, the changes in the total self-care score in the intervention group were statistically significant and increasing (p < 0.05). There was no statistically significant difference in the mean HbA1c levels between the groups before or after the intervention (p > 0.05). However, the Wilcoxon test showed that the mean HbA1c level in the intervention group changed significantly over the study period (p < 0.05); this change was not statistically significant in the control group.
The results of the correlation analysis indicated that changes in the scores of self-care dimensions did not have a statistically significant relationship with changes in the scores of health literacy dimensions in either group ( p > 0.05) . This study aimed to investigate the impact of social media-based education on self-care, health literacy, and glycated hemoglobin levels in individuals with type 2 diabetes. The findings of this study demonstrated that education through social networks effectively enhances health literacy among diabetic patients. Initially, no statistically significant difference was found in health literacy scores between the intervention and control groups. However, after receiving the education, the intervention group showed significant improvements in reading, understanding, evaluation, decision-making, behavior, and overall health literacy scores. In contrast, the control group did not exhibit such improvements. These findings are consistent with previous studies highlighting the potential of online interventions in enhancing health literacy among individuals with chronic diseases . Health literacy refers to the knowledge and skills necessary to make informed health decisions. It plays a crucial role in managing chronic diseases such as diabetes. Low health literacy is associated with poorer adherence to treatment, worsening medical conditions, increased hospitalization, and higher healthcare costs . A meta-analysis has shown that health literacy has a small but significant effect on glycemic management in patients . It is important to note that effective self-care in patients is closely linked to adequate health literacy . Our study findings emphasize the effectiveness of virtual education through the Telegram messaging app in improving patients’ health literacy. A cross-sectional study by Moulaei et al. demonstrated that Iranian users obtain more information about their health conditions through social media platforms such as WhatsApp, Telegram, and Instagram. Based on these findings, it is recommended to incorporate social media-based education into routine care programs for individuals with T2D. Our study findings demonstrated that the mean total score of the self-care questionnaire in the intervention group was significantly higher than that in the control group. These findings are supported by previous studies. Biglar Chopoghlo et al. also showed that self-care education on social media platforms is associated with increased self-efficacy scores in Iranian adolescent girls with type 1 diabetes. Similarly, Alanzi et al. found that using the WhatsApp social network led to a significant increase in knowledge and self-efficacy among T2D patients in the intervention group compared to the control group. Tang et al. reported in a cross-sectional study that the use of innovative technologies, such as mobile phones and multimedia tools, is effective in improving self-care activities in individuals with T2D. These results indicate that social networks should be considered a valuable tool for education due to their benefits, such as accessibility, ease of use, access to up-to-date health information, cost reduction, and the possibility of online interactions in educational programs . 
In our study, although the overall score of the self-care questionnaire after the intervention was significantly higher in the social media education group than in the control group, the scores of some self-care dimensions, including physical activity and treatment, also improved significantly in the control group between the beginning and the end of the study. This finding may be attributed to the self-care practices or education that control-group participants received through physicians and other communication channels. Additionally, this result highlights the complexity of the relationship between health literacy and self-care in diabetic patients. Further research is needed to clarify the influential factors and potential confounding variables in self-care among T2D patients. Existing evidence also supports this finding. For example, Hosseinzadeh et al. demonstrated that both virtual and face-to-face education led to increased self-care in pregnant women with gestational diabetes compared to the control group; however, no significant difference was found between the two intervention groups. Aligholipour et al., in a similar study conducted in Iran, demonstrated that although both social network-based and face-to-face education increased patients' self-care activity scores, no significant difference in outcomes was found between the two intervention groups. Although education through modern technologies such as mobile phones can be as effective as face-to-face education, researchers emphasize that the interpersonal relationship between the nurse and the patient is a fundamental principle of nursing care. Therefore, virtual education should not entirely replace face-to-face education. On the other hand, using virtual networks presents risks, such as exposure to incorrect information, misinterpretation of medical results, and distractions caused by advertisements on these platforms. The findings of the present study indicate that changes in self-care dimension scores do not have a statistically significant relationship with changes in health literacy scores, which contradicts the findings of İlhan et al. Maleki Chollou et al. likewise demonstrated a positive and nearly direct relationship between health literacy dimensions and self-care behaviors in T2D patients across all dimensions. In line with our findings, by contrast, Eyüboğlu and Schulz found no association between health literacy and self-care behaviors in diabetes patients. This discrepancy across studies may be attributed to differences in the implementation methods of educational interventions, measurement tools, and target groups. Some existing evidence also links patients' demographic characteristics to their self-care activities. Education remains an integral part of the care of patients with T2D. İlhan et al. demonstrated that individuals with low health literacy have difficulties in self-care, understanding health information, and adhering to treatment. Therefore, early identification of knowledge deficiencies regarding self-care can lead to better treatment adherence and delay the onset of complications. Social networks allow individuals to use educational resources at lower cost and without time and location constraints, while expanding their social interactions. RobatSarpooshi et al.
also reported an average level of self-care scores among Iranian patients in a study investigating the relationship between health literacy and self-care behaviors. Furthermore, that study showed that higher levels of education and awareness among patients were associated with greater adherence to self-care behaviors. The findings of these studies indicate that developing an educational program, especially a self-care program, can be effective in enhancing patient self-efficacy. To our surprise, we did not find any statistically significant relationship between changes in self-care behaviors and changes in HbA1c levels in the intervention group. This finding contradicts some previous studies that reported a positive correlation between improved self-care behaviors and HbA1c levels. Nevertheless, the results of some studies support our findings. In a study assessing the effectiveness of self-care education delivered via a mobile phone program to T2D patients, Lee et al. found that the difference in HbA1c changes between the control and intervention groups at week 26 was not significant. Contradictory results regarding the impact of self-care behaviors observed in different countries may be influenced by socio-economic and cultural differences among these countries and by variable levels of self-care among individuals with diabetes. Additionally, differences in follow-up periods and intervention durations are other potential reasons for this discrepancy. In our study, although a significant difference in mean HbA1c levels between the intervention and control groups was not observed, HbA1c levels decreased after the intervention. Some studies have reported significant improvements in blood glucose control in diabetic patients with online interventions, while others have reported limited or no effect. Possible reasons for this difference in reported results include variability in follow-up durations and different laboratory methods for measuring patients' blood glucose levels. One of the strengths of this study is its examination of the impact of social media-based education on health literacy and self-care in individuals with T2D. However, there are several limitations to this study. First, self-care is a complex construct that may be influenced by various confounding factors; for instance, diabetic patients have access to multiple sources of disease-related education, which could potentially affect the study outcomes. Additionally, the relatively short duration of the follow-up and the intervention represents another limitation. Therefore, it is recommended that future research investigate the long-term effects of virtual education on glycemic control and the prevention of diabetes-related complications, with extended follow-up periods. In addition, variables such as personal motivation and family support, which were not measured in this study, also influence patient self-care; future studies should account for these confounding factors. The results of this study have shown that social networks, such as Telegram, can effectively provide self-care education to a large number of participants, offering broader and more accessible education compared to individual, in-person methods. Given the limited number of healthcare providers in developing countries, this approach is cost-effective, as it does not impose a significant financial burden compared to other traditional teaching methods.
Additionally, considering the varied work schedules and personal commitments of participants, this platform eliminates the time constraints typically associated with in-person education. |
Molecular Diagnostic Yield of Exome Sequencing in Patients With Congenital Hydrocephalus | 78d051e0-cbf2-4ec9-a95e-bc5788af3d1a | 10665979 | Pathology[mh] | Congenital hydrocephalus (CH) is a primary form of hydrocephalus characteristically marked by pathological expansion of the cerebral ventricles. CH is present in approximately 1 in 1000 live births and is among the most common neurodevelopmental disorders (NDDs) and structural brain disorders. In contrast with other NDDs, CH is often diagnosed postnatally or within the first year of life by radiological identification of cerebral ventriculomegaly and additional clinical and phenotypic features, such as macrocephaly. Prenatal methods depend largely on radiological identification of ventriculomegaly due to practical constraints of in utero diagnostics. Identification of severe ventriculomegaly is the principal (and often sole) diagnostic feature in prenatal CH cases. CH is a primary (idiopathic) disease and, by definition, lacks an identifiable clinical antecedent. Although clinical causes are unclear, the hallmark pathogenic cerebrospinal fluid accumulation can be associated with cerebral malformations such as aqueductal stenosis. Recent efforts to elucidate genetic factors have contributed to evidence of rare associated genetic variants in CH. Genetic factors are thought to contribute to both syndrome-associated and nonsyndromic (sporadic) CH; however, although variants in more than 100 genes have been associated with syndromic forms of hydrocephalus, few have been associated with nonsyndromic forms. Despite efforts to elucidate genetic causes of nonsyndromic CH, the current body of associated variants accounts for only 5% of cases. It has been estimated that more than 40% of CH cases have genetic origins, and, thus, the vast majority of these cases remain to be elucidated. Several studies have used exome sequencing (ES) in individuals with CH with varying results; some of these studies have identified associated variants in as many as 78% to 90% of cases. Due to the complex heterogeneity and implications of rare genetic variants in CH, using ES as a diagnostic tool might help uncover genetic factors associated with CH and aid in clinical management of patients. Recently, 2 separate recommendations were released in support of ES as a first-line diagnostic test for individuals with NDDs. Srivastava et al used meta-analytic techniques to support ES as a high-yield diagnostic test for patients with global developmental delay (DD), intellectual disability (ID), and autism spectrum disorder. Subsequently, the American College of Medical Genetics and Genomics released clinical guidelines recommending ES for those with ID, DD, or congenital anomalies. Neither recommendation included CH as an NDD of interest. In this study, we focused on CH as a potential addition to these recommendations by testing the hypothesis that the diagnostic yield of ES in patients with CH is comparable to that of the previous guidelines establishing ES as a first-tier test for other NDDs. This systematic review and meta-analysis was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guideline. We also used the Meta-Analysis of Observational Studies in Epidemiology (MOOSE) reporting guideline.
Search Strategy and Information Sources We searched PubMed, Cochrane Library, and Google Scholar to find relevant studies published in English using the following search terms: congenital hydrocephalus, ventriculomegaly, cerebral ventriculomegaly, primary ventriculomegaly, fetal ventriculomegaly, prenatal ventriculomegaly, molecular analysis, genetic cause, genetic etiology, genetic testing, exome sequencing, whole exome sequencing, genome sequencing, microarray, microarray analysis, and copy number variants. See eTable 1 in the Supplement for the combinations of these search terms. Due to the advent of ES in late 2009 and early 2010, the search retrieved articles published between January 1, 2010, and the search date, April 10, 2023. Citations retrieved were screened using Covidence. Eligibility Criteria and Selection Process We included studies with CH or CH-like probands. The distinction between CH vs CH-like probands was determined by individual study author description. CH probands were explicitly described by the study authors as receiving a diagnosis of hydrocephalus. CH-like probands were fetal cases denoted only as receiving a diagnosis of severe cerebral ventriculomegaly, often precluded from a confirmed diagnosis of hydrocephalus due to prenatal constraints. Studies that only included cases of mild or moderate ventriculomegaly were not considered suggestive of CH and were excluded. Studies eligible for inclusion included those with at least 10 probands with CH or severe ventriculomegaly who were undergoing ES. Exclusion criteria included studies performing ES with fewer than 10 probands with CH or ventriculomegaly, studies that did not discuss diagnostic yield, and studies not using ES (ie, using another genetic test such as chromosomal microarray or gene panel test). To assess for inclusion criteria, search results were screened for relevance of titles and abstracts, and articles identified as relevant underwent full-text review. Following full-text review, articles meeting all eligibility criteria were selected for final inclusion (eTable 2 in the Supplement). Risk of Bias Assessment In compliance with current recommendations for meta-analyses of proportions with fewer than 10 studies, risk of bias was assessed qualitatively. We referenced the Risk of Bias in Nonrandomized Studies of Interventions tool. Data Collection and Data Items Data from included studies were populated into an extraction table by 2 independent reviewers (A.B.W.G. and N.H.M.). Data extracted included number of probands with positive ES (defined as pathogenic and likely pathogenic variants detected, for most articles) and the number of probands with negative ES (defined as variants of uncertain significance, likely benign, benign, or no variants detected, for most articles). Any discrepancies were resolved by consensus of the 2 reviewers. Grading of ventriculomegaly was determined by study authors and largely followed the convention of mild (10-12 mm), moderate (13-15 mm), and severe (≥16 mm). Secondary patient data were extracted for designation of patients into various subgroups for subsequent statistical analysis, including (1) clinical feature and diagnosis (CH or ventriculomegaly), (2) syndromic or nonsyndromic case, and (3) history of consanguinity. A proband's clinical features were categorized as suggestive of syndromic CH according to (1) phenotype-based diagnosis of an associated syndrome and/or (2) implication of associative variation in a syndrome-associated gene.
Phenotype-based diagnoses were determined by respective study authors, and syndrome-associated genes were denoted as such either by study author mention or by cross-reference with a list of known CH syndrome–associated genes. If an individual lacked either sign of syndromic CH, the patient was designated to the isolated, nonsyndromic group. Statistical Analysis Using a random-effects model for meta-analyses of single proportions, the primary outcome (overall diagnostic yield) and subsequent comparisons of interest were evaluated. Freeman-Tukey double arcsine transformation was applied as the variance-stabilizing method for meta-analysis of single proportions, and a pooled diagnostic yield and 95% CI were calculated. As secondary comparisons, diagnostic yields were estimated for probands on the basis of (1) clinical feature (CH or ventriculomegaly); (2) isolated, nonsyndromic features; and (3) reported consanguinity in the proband's family. Interstudy heterogeneity was estimated by an I² statistic, with P < .05 denoting statistical significance. All analyses were conducted using SUMARI (JBI). Data analysis was conducted in April 2023.
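For intuition, here is a compact numpy sketch of the pooling approach just described: Freeman-Tukey double arcsine transformation, DerSimonian-Laird random effects, and an I² heterogeneity estimate. This is illustrative only; the actual analysis was performed in SUMARI, and the event counts below are placeholders, not study data.

```python
import numpy as np

def ft_pool(events, n):
    """Random-effects pooling of proportions via the Freeman-Tukey
    double arcsine transform (DerSimonian-Laird between-study variance)."""
    events, n = np.asarray(events, float), np.asarray(n, float)
    t = np.arcsin(np.sqrt(events / (n + 1))) + np.arcsin(np.sqrt((events + 1) / (n + 1)))
    v = 1.0 / (n + 0.5)                      # approximate variance of t

    w = 1.0 / v                              # fixed-effect weights
    t_fixed = np.sum(w * t) / np.sum(w)
    q = np.sum(w * (t - t_fixed) ** 2)       # Cochran's Q
    dof = len(t) - 1
    i2 = 100.0 * max(0.0, (q - dof) / q) if q > 0 else 0.0

    tau2 = max(0.0, (q - dof) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                  # random-effects weights
    t_pool = np.sum(w_re * t) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))

    back = lambda x: np.sin(x / 2.0) ** 2    # simple back-transform to a proportion
    return back(t_pool), (back(t_pool - 1.96 * se), back(t_pool + 1.96 * se)), i2

# Placeholder counts (positive ES, total probands) for 3 hypothetical studies:
print(ft_pool([12, 30, 5], [40, 60, 35]))
```

Note that the back-transform here is the simple sin²(t/2) inversion; exact inversions (eg, Miller's harmonic-mean correction) differ slightly but do not change the logic of the sketch.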
Study Selection From the initial pool of 498 search results, 91 duplicate articles were removed before screening, and an additional 18 manually selected articles were added to the screening pool (eFigure in the Supplement). At the title and abstract level, of the 425 articles screened, 357 were excluded. Of the 68 articles remaining for full-text review, 59 articles were excluded due to insufficient number of probands, use of genetic testing other than ES, lack of mention of molecular diagnostic yield, lack of specificity to CH, or overlap of cohort with another included study. At this stage, 10 additional articles were potentially eligible for inclusion but did not report data specific to CH or ventriculomegaly and/or ES yield. Corresponding authors of such articles were contacted via email by 1 of the reviewers (A.B.W.G.) with a request for supplemental data. Of the authors contacted, 1 provided supplemental data; however, the number of CH and ventriculomegaly probands was insufficient for inclusion, and the study was excluded. For the remaining 9 reports, none of the authors contacted provided supplemental data. Subsequently, 9 studies remained for final inclusion. One of the studies was a secondary analysis of 2 cohorts.
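As a quick sanity check, the selection counts reported above are internally consistent:

```python
# Arithmetic check of the selection flow: 498 records, 91 duplicates removed,
# 18 added manually, 357 excluded at title/abstract, 59 at full text.
initial, duplicates, added = 498, 91, 18
screened = initial - duplicates + added     # 425 screened
full_text = screened - 357                  # 68 full-text reviews
included = full_text - 59                   # 9 studies included
assert (screened, full_text, included) == (425, 68, 9)
```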
Risk of bias was low for all included studies except for 1 domain grade of serious risk or no information for 1 study due to the nature of the report as a conference abstract (eTable 3 in the Supplement). Study Characteristics Individual study characteristics and demographics of the cohort of 538 probands from all 9 studies were tabulated as reported and as available in the original studies. Overall, extracted cohorts included individuals with isolated and nonsyndromic CH, syndromic CH, and ventriculomegaly. Five studies included only CH probands, 1 study included both CH and ventriculomegaly probands, and 3 studies included only ventriculomegaly probands. All studies looking exclusively at cases with ventriculomegaly were fetal studies. All ventriculomegaly cases included had severe ventriculomegaly, except for probands from 1 included study, which only reported a combined, inextricable yield for moderate and severe ventriculomegaly cases. Eight studies included whole or partial cohorts with isolated and/or nonsyndromic cases allowing for targeted estimation of diagnostic yield. The remaining study with only syndromic CH individuals was excluded from the corresponding subcomparison. Four studies reported patient-level consanguinity data for the entire cohort, and the remaining 5 studies that did not report consanguinity were excluded from the subcomparison. Results of Syntheses To pool diagnostic yield from studies with disparate methods and/or populations, a random-effects meta-analysis was implemented. For the pooled cohort of 538 CH and ventriculomegaly probands from 9 studies, the random-effects methods revealed a diagnostic yield of 37.9% (95% CI, 20.0%-57.4%; I² = 90.1) (A). For CH probands alone, the yield was higher (43.2%; 95% CI, 19.6%-68.4%; I² = 92.8) than the pooled CH and ventriculomegaly yield (37.9%) and higher than the yield of ventriculomegaly alone (27.9%; 95% CI, 4.4%-59.4%; I² = 75.8) (B and C). For isolated and/or nonsyndromic cases, the yield for CH and ventriculomegaly probands was higher (21.3%; 95% CI, 12.8%-31.0%; I² = 55.7) (A) than for CH probands alone (18.8%; 95% CI, 15.0%-22.9%; I² = 0.2) (B). For CH and ventriculomegaly probands with history of consanguinity, the yield was higher (76.3%; 95% CI, 65.1%-86.1%; I² = 0) (A) than for those without reported consanguinity (16.2%; 95% CI, 12.2%-20.5%; I² = 0) (B).
Thus, clinicians may lean toward implementing ES for cases that are more likely to harbor genetic factors, such as (1) confirmed hydrocephalic cases; (2) cases suggestive of isolated and/or nonsyndromic CH; and (3) cases with other factors associated with mendelian CH forms, such as history of consanguinity. Our results support implementation of ES in these cases with high mendelian risk. Additionally, we argue that, as ES becomes more cost-efficient and time-efficient, ES should also be considered as a first-tier test for CH in all patients, including (1) unconfirmed prenatal cases suggestive of hydrocephalus, (2) cases with signs of syndromic associations, and (3) cases without risk factors such as consanguinity. Our evidence and reasoning are as follows. First, for prenatal cases, detection of severe ventriculomegaly can be, but is not always, translated to a diagnosis of hydrocephalus. Implementing ES in prenatal CH-suggestive cases would allow for clearer delineation of benign and nonspecific vs pathogenic ultrasonographic findings. Furthermore, earlier CH diagnosis would allow for earlier postnatal treatment and, perhaps, better clinical outcomes. Allowing families and clinicians more time to provide tailored, informed care—emotionally, financially, clinically, and otherwise—for a newborn with a known CH diagnosis could increase quality of life for all involved. In our analysis, all ventriculomegaly cases were severe and prenatal. The diagnostic yield for CH cases was higher than for ventriculomegaly cases; however, the yield for ventriculomegaly alone (27.9%) is still considerable (when compared with the 36% yield in the previous guideline for ES in NDDs), and so we recommend that ES also be considered in prenatal cases with isolated, severe ventriculomegaly suggestive of CH. Second, the question of ES for syndromic CH surrounds the necessity, not the efficacy, of this comprehensive test as opposed to a more targeted, less expensive, and faster option (eg, gene panel). For most syndromic cases, an associated variant could likely be detected by a gene panel of the more than 100 known syndrome-associated genes ; however, there is still value in ES for syndromic cases. Although genetic and clinical efforts to elucidate syndromic forms have been successful relative to nonsyndromic forms, , proper detection and understanding of phenotypic presentation of syndromic forms can be nebulous. For example, some individuals with identified variants in known syndromic genes can clinically present as isolated CH cases. This phenomenon highlights the uncertainty in detecting CH syndromes. In addition to phenotypic uncertainty, CH syndromes can also present with genetic uncertainty and heterogeneity. One study noted that some patients with variants in the known CH-associated gene, L1CAM , had a negative prenatal targeted gene panel and later received a diagnosis by ES only. Offering ES for patients with symptoms suggestive of syndromic CH, even those with established associated variants in syndrome-associated genes, can result in identification of additional, potentially clinically informative, associated variants in nonsyndromic genes. Thus, ES for syndromic CH can provide a more comprehensive and informative snapshot than panels targeted for syndromic genes alone. Targeted diagnostic panels may currently be a more efficient method for strictly syndromic CH forms, but ES continues to be a competitive alternative due to the heterogeneity of syndrome-associated forms. 
Third, although our analysis suggests that ES in patients with history of consanguinity offers a disproportionately higher yield (76.3%) than for patients without (16.2%), patients without history of consanguinity still have a considerable yield and should not be excluded from these precise diagnostic methods. Furthermore, risk factors may not always be reported or detected; therefore, the absence of reported risk factors should not necessarily serve as a deterrent against offering ES. Thus, due to the clinical and genetic heterogeneity of CH, the substantial diagnostic yields in all analyzed subgroups, and the increasing accessibility of ES, we urge clinicians to consider ES as the premier clinical diagnostic test for all CH patients. According to recent practice guidelines, genetic testing might not be offered for patients with CH without comorbid NDD. Many patients with CH would have to wait to develop an additional NDD for which ES is recommended (eg, ID or DD) before receiving genetic testing. This current paradigm would result in delayed care for patients with CH. Because CH can be diagnosed earlier than ID or DD, testing all CH probands would allow for a timely genetic diagnosis with potential improvement in clinical outcomes. Beyond diagnostics, increasing rates of CH sequencing will accelerate identification of CH genes and pathomechanisms and allow for new translational discoveries such as the association of variants with clinically relevant variables like neurosurgical outcome. Limitations This study has limitations. Although risk of bias was low in most domains for the included studies, one exception was the inclusion of a non–peer reviewed conference abstract with serious risk. However, because risk of bias was low in all other domains, and the abstract contained all necessary data for inclusion, we included this report. This meta-analysis included cases with CH or CH-like features, namely ventriculomegaly. Included studies denoting only ventriculomegaly as a clinical feature looked exclusively at fetal cases. We included cases from these fetal ventriculomegaly studies as having CH-like features because severe ventriculomegaly is often the sole feature for prenatal diagnosis of CH. To limit nonspecific and benign cases, we included cases with severe ventriculomegaly and excluded cases denoted as mild, moderate, or ungraded and unspecified. We excluded fetal cases with mild or moderate ventriculomegaly because the majority (>90% of mild cases) of these have been shown to be associated with typical neurodevelopmental outcomes and are nonspecific to CH. The inclusion of ventriculomegaly cases in this CH meta-analysis raises certain concerns. Although we attempted to limit nonspecific and benign cases, including severe ventriculomegaly may have introduced some nonspecific cases into our study. However, the number of ventriculomegaly cases was a fraction of the total cohort (43 of 538 probands), and we ran additional analyses to examine CH and ventriculomegaly alone . Another consideration is that Schindewolf et al presented an inextricable group of moderate and severe ventriculomegaly cases. We included this group in our meta-analysis. Furthermore, Schindewolf and colleagues used a grading scale skewed toward severe ventriculomegaly (mild, 10-11 mm; moderate, 12-15 mm; or severe, ≥15 mm). 
However, given the high yield of that individual study cohort (42.9%), the inclusion of potentially nonspecific moderate cases and skewing toward more severe ventriculomegaly grades did not hamper the diagnostic yield in comparison with the standard overall yield set by our meta-analysis (37.9%). Our study is also limited by the designation of syndromic vs isolated and/or nonsyndromic cases. We used multiple data sources, including study author genotypic and phenotypic report and our own cross-reference of associated variants with a list of known syndrome-associated genes, to categorize cases. However, definitive distinction between the 2 CH forms is difficult, especially since additional syndromic symptoms may develop over time and may not present at the time of clinical assessment. This is an added consideration when grading prenatal cases, which can present as isolated but may develop syndromic symptoms postnatally. Our categorization of patients depended solely on data available at the time of clinical assessment and study publication and is thus limited. Additionally, we identified a low number of studies and/or patients in certain subanalyses. For example, only 2 studies were included in the subanalysis of patients without consanguinity. Furthermore, 1 study had only 1 patient with ventriculomegaly (with negative ES), and thus was ineligible for the ventriculomegaly-specific subanalysis. Our findings underscore the high yield of ES in CH. Given that the percentage of patients receiving a molecular diagnosis by ES in CH is comparable to that of the current recommendation for other NDDs, we conclude that ES should also be recommended as a first-tier clinical diagnostic test for CH. |
Research on the Medicinal Chemistry and Pharmacology of Taxus × media | c1c42fff-917e-47df-a42a-622d2ee163f8 | 11171555 | Pharmacology[mh] | Taxus × media (Taxus × media Rehder), a member of the genus Taxus in the family Taxaceae, is a plant that combines medicinal, timber, and ornamental values, thus possessing significant economic and research importance. As a species within the Taxus genus, it plays a crucial role in the ecosystem and is highly valued in the field of medicine due to its rich bioactive compounds. A thorough understanding of the natural habitat and cultivation of Taxus × media is vital for conserving this precious species and developing its medicinal resources. The natural distribution and ecological habits of Taxus × media provide a theoretical basis for biodiversity and ecosystem integrity. This species primarily grows in subtropical mountain forests, and its unique growth environment and ecological characteristics are crucial in formulating effective cultivation and conservation strategies. With the increasing depletion of natural resources and environmental pressures, Taxus × media faces numerous challenges, including habitat loss, overharvesting, and the impact of climate change. This species is particularly susceptible to temperature fluctuations, which can affect the metabolic processes essential for synthesizing key bioactive compounds. Research has shown that under varying temperature conditions, Taxus × media expresses different proteins related to the synthesis of paclitaxel and other secondary metabolites. These proteins are involved in the precursor supply for the paclitaxel biosynthesis pathway, such as 1-deoxy-D-xylulose-5-phosphate synthase (DXS) and 1-deoxy-D-xylulose-5-phosphate reductoisomerase (DXR). This differential expression may explain the variations in the content of bioactive compounds under different environmental conditions. Additionally, experiments have demonstrated that environmental factors, such as temperature, significantly influence the content of these bioactive compounds. For instance, the content of paclitaxel found in Taxus × media is more than five times that of Taxus mairei. Furthermore, changes in precipitation patterns and an increased frequency of extreme weather events can further stress natural populations. Understanding these dynamics is crucial for developing effective conservation strategies to ensure the survival of Taxus × media under changing climatic conditions. Therefore, a comprehensive understanding of its cultivation status, assessment of the survival of wild populations, and exploration of effective conservation methods are essential for the protection of Taxus × media and for maintaining biodiversity and ecological balance. The phytochemical constituents of Taxus × media form the material basis and core of its medicinal value. Paclitaxel, its most famous compound, has been widely used clinically for its antitumor activities. However, recent studies have revealed that Taxus × media contains various other bioactive compounds. These compounds' extraction, preparation, analysis, and biological activity assessment are current research hotspots. A deep understanding of the structure and function of these chemical components is crucial for developing new drugs and treatment strategies. Pharmacological studies on Taxus × media are not limited to its anticancer effects but also include exploring its potential anti-inflammatory, antimicrobial, and other activities.
These studies reveal the potential effects and mechanisms of action of Taxus × media compounds and provide a theoretical basis for new drug development. Integrating clinical trial and laboratory research data can enhance the understanding of these compounds' pharmacological properties and future applications. After revealing its rich medicinal potential, the challenge becomes how to utilize these compounds sustainably. Addressing the scarcity of Taxus × media medicinal resources and conservation challenges, finding synthetic substitutes, optimizing extraction processes, or developing new biotechnological methods are essential to sustainable utilization. This review aims to thoroughly explore various research directions of Taxus × media, including its natural origin, cultivation status, chemical constituents, pharmacological activities, and application prospects, providing a comprehensive reference for future medical research and development. 2.1. Origin and Geographical Distribution Globally, plants of the Taxaceae family, commonly known as yews, primarily grow in Asia, North America, and Europe, with a total of approximately five genera and 23 species. In China, the Taxus genus is represented by four native species, one variety, and one introduced species, along with a hybrid. The native species are Taxus chinensis (Pilger.) Rehd., Taxus wallichiana Zucc., Taxus cuspidata Siebold and Zucc., and Taxus yunnanensis W.C.Cheng and L.K.Fu. The variety is Taxus mairei (Lemée and H.Lév.) S.Y.Hu, while the introduced species is Taxus × media Rehder. Taxus × media is a species within the Taxus genus of the Taxaceae family, discovered in 1918 by American scholars as an artificial hybrid between the female Taxus cuspidata and the male Taxus baccata. Originally native to North America, specifically Canada and the United States, this plant has now been introduced and cultivated in several countries around the world, including China, India, Argentina, and South Korea, with a primary distribution in the Asian region. The accompanying figure shows the geographical distribution of Taxus × media in China. In the mid-1990s, China introduced Taxus × media from Canada. In 1995, the Sichuan Provincial Academy of Forestry first introduced it into Sichuan province. After nearly 15 years of elite breeding and cultivation trials, a new variety with a high paclitaxel content, Chuanlin Taxus × media, was developed, along with an efficient cultivation technology system for Taxus × media. According to authoritative institutions, the biological characteristics of the introduced Taxus × media have remained stable without any mutations. It is now cultivated in various locations in China, including the Sichuan, Guangxi, and Shandong provinces. Through selective breeding of Taxus × media, more than ten cultivars have been developed to date. The cultivation of this plant is primarily concentrated in areas with suitable altitude and climatic conditions, preferring deep, loose, slightly acidic sandy loam soils. In Taiwan, Taxus × media is a rare plant found in only six to seven locations. In the Himalayan region, it is typically found at altitudes between 1200 and 2000 m, while in Eastern China, most sites are below 1200 m. In Southern Vietnam, Taxus × media exists in several small sub-populations, such as in the Lam Dong and Khanh Hoa provinces. Taxus × media is one of the yew species approved by the U.S. FDA for the extraction of paclitaxel.
Artificial cultivation has become an important sustainable development strategy due to the high demand in recent years for medicinal components of Taxus × media such as paclitaxel. In several provinces of China, particularly in its natural distribution areas, numerous artificial cultivation bases of Taxus × media have been established. These bases alleviate the harvesting pressure on wild yew populations and provide opportunities to study their growth characteristics and medicinal value. Furthermore, the Chinese government and environmental organizations are implementing conservation measures to protect this precious species, including establishing protected areas and restricting commercial harvesting activities. Even before the pressing challenges posed by climate change, Taxus species had faced significant threats from human activities, primarily overharvesting for their valuable medicinal compounds and habitat destruction for agricultural expansion. With the onset of climate change, these species are now confronting additional pressures such as increased temperatures and altered precipitation patterns, which can lead to physiological stress and reduced reproductive success. These climatic factors can significantly shift the phenology and distribution of Taxus species, potentially leading to mismatches in the ecosystem interactions that sustain them. Moreover, the increased frequency of extreme weather events, such as droughts and heavy rains, can further destabilize their fragile habitats, leading to a higher risk of population decline. In light of these challenges, conservation efforts must not only address immediate threats from human exploitation but also incorporate adaptive strategies to mitigate the impact of climate change. This includes the establishment of genetic reservoirs and assisted migration to areas predicted to remain climatically stable. Understanding the full scope of these threats is crucial to developing effective conservation strategies that ensure the long-term survival of the Taxus species in their natural habitats. 2.2. Artificial Cultivation and Comparative Analysis Artificial cultivation of Taxus × media has become crucial due to the soaring demand for taxanes like paclitaxel, known for its anticancer properties. Optimized cultivation systems, which include controlled environments like greenhouses and open-field plantations, are tailored to enhance growth by modifying factors such as watering schedules, nutrient application, and pest management strategies. These systems crucially influence the plant's secondary metabolite profile by allowing precise control over environmental conditions like light intensity, temperature, and soil pH. Studies, such as those exploring the synergistic effects of cyclodextrins and methyl jasmonate, have shown that such controlled environments can lead to an increase in taxane yield by up to 30% compared to traditional cultivation methods. While existing research on the impact of climate and geographical factors on Taxus × media is limited, comparative studies within the Taxaceae family suggest that these factors significantly affect metabolite synthesis in similar species. For example, variations in temperature and light exposure were found to alter the concentration of baccatin III, a precursor to paclitaxel, by as much as 25%.
This indicates potential areas for future research, where exploring how specific environmental adjustments affect Taxus × media could lead to the development of more efficient cultivation strategies. Comparative analysis between the wild and cultivated Taxus × media has revealed significant differences in their morphological and chemical profiles. Cultivated plants often exhibit higher growth rates and denser foliage, which are believed to contribute to their enhanced metabolite production. However, there are trade-offs, as these plants sometimes show lower diversity in certain secondary metabolites, which could affect their overall medicinal value. Advancing our understanding of these differences is essential for optimizing cultivation practices and ensuring the sustainability of Taxus × media resources.
Cultivated plants often exhibit higher growth rates and denser foliage, which are believed to contribute to their enhanced metabolite production. However, there are trade-offs: these plants sometimes show lower diversity in certain secondary metabolites, which could affect their overall medicinal value. Advancing our understanding of these differences is essential for optimizing cultivation practices and ensuring the sustainability of Taxus × media resources. Taxus × media, a plant of significant medicinal value, has been a focal point of research in medicinal chemistry and botany. The whole plant of Taxus × media is rich in phytochemical constituents, primarily paclitaxel and its derivatives, various alkaloids, and flavonoids. Analysis of the different medicinal parts of Taxus × media (such as bark, fresh leaves, seeds, and seed coat) has identified over 800 compounds covering 11 subcategories, with their main components and contents presented in .

3.1. Chemical Composition of Different Parts of Taxus × media

Different parts of Taxus × media contain distinct phytochemical components. Using targeted metabolomics, studies have shown that tissues such as bark, fresh leaves, seeds, and receptacles contain a variety of substances, with different parts potentially housing unique compounds. shows the representative phytochemical constituents of different parts of Taxus × media.

3.1.1. Chemical Composition of Taxus × media Leaves and Twigs

Taxus × media is an evergreen shrub with abundant, easily harvested leaves and twigs. Studies reveal significant differences in compound distribution among tissues of Taxus × media; taxane compounds are most abundant in fresh leaves, which therefore show the highest overall concentration of these components. Additionally, the leaves and twigs of Taxus × media contain other chemical components such as flavonoids, volatile oils, and inositol. The leaves and twigs are rich in taxane components, with paclitaxel, cephalomannine, and 10-deacetylbaccatin III (10-DAB) as representative compounds. Other diterpenoids have also been identified, including 7,9-deacetyltaxinine, 9-deacetyltaxinine A, d-deacetyltaxinine B, taxinine-11, 2-deacetoxytaxinine E, 2-deacetoxytaxuspine C, taxagifin, and 12-oxide. Among these, 10-DAB is the most abundant at approximately 0.75 mg/g, followed by paclitaxel, cephalomannine, and 10-deacetyl paclitaxel (10-DAT) at approximately 0.6 mg/g, 0.5 mg/g, and 0.126 mg/g, respectively. Further studies have shown that in 15-year-old Taxus × media, the paclitaxel content of one-year-old leaves and twigs can reach 0.02–0.04%, with an even higher 10-DAB content of 0.08–0.15%. The leaves and twigs are also rich in flavonoid compounds, including kaempferol, aromadendrin, apigenin, sciadopitysin, ginkgetin, luteolin, quercetin, and amentoflavone. Phenolic compounds are also abundant, including 4-hydroxybenzaldehyde, p-hydroxybenzoic acid, and pyrocatechol. The total flavonoid yield from Taxus × media leaves and twigs is about 128.1 mg/g of dry weight. Polysaccharides are present in varying amounts in the leaves and twigs, ranging from 1.1781% to 3.0115%, with an average content of 2.1367%. The molecular weight of these polysaccharides is about 59.2 kilodaltons (kDa), corresponding to approximately 365 sugar residues, with rhamnose, arabinose, mannose, glucose, and galactose in a ratio of approximately 4:6:1:1:4.
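As a rough consistency check on these polysaccharide figures, the reported molecular weight, residue count, and sugar ratio can be cross-checked with a few lines of Python. The anhydro-residue masses below are standard values; the calculation is illustrative only, since the literature figures are rounded.

```python
# Cross-check: MW ~59.2 kDa, ~365 residues, Rha:Ara:Man:Glc:Gal = 4:6:1:1:4.
anhydro_mass = {            # residue mass after loss of water in the glycosidic bond (Da)
    "rhamnose": 146.14,     # deoxyhexose: 164.16 - 18.02
    "arabinose": 132.12,    # pentose:     150.13 - 18.02
    "mannose": 162.14,      # hexoses:     180.16 - 18.02
    "glucose": 162.14,
    "galactose": 162.14,
}
ratio = {"rhamnose": 4, "arabinose": 6, "mannose": 1, "glucose": 1, "galactose": 4}

mean_residue = sum(anhydro_mass[s] * n for s, n in ratio.items()) / sum(ratio.values())
print(f"mean anhydro-residue mass: {mean_residue:.1f} Da")                     # ~146.9 Da
print(f"predicted MW for 365 residues: {365 * mean_residue / 1000:.1f} kDa")   # ~53.6 kDa
print(f"mean residue mass implied by 59.2 kDa / 365: {59200 / 365:.1f} Da")    # ~162.2 Da
```

The predicted and reported values agree to within roughly 10%, which is reasonable for rounded literature estimates; the residual gap would be consistent with a somewhat hexose-richer composition or a slightly lower residue count.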
The leaves and twigs of Taxus × media are also rich in volatile oils. The volatile oil content varies, being higher in the leaves, which are therefore the primary source for extraction. The volatile oil extracted from Taxus × media leaves and twigs contains over 20 components, with a yield ranging from 4.21% to 9.70%. When steam distillation and gas chromatography–mass spectrometry (GC–MS) were used to investigate the essential oils of the leaves and twigs of Taxus × media, 30 chemical components were identified, accounting for 97.14% of the total. The major compounds include cis-3-hexen-1-ol, 3,3-dimethylacrylic acid, and pentenyl ethyl alcohol, constituting 29.42%, 18.13%, and 15.61%, respectively; the least abundant, trans-2-hexenol, accounts for only 0.24%. A GC–MS analysis of essential oil components in the leaves identified 34 compounds, including benzene propanenitrile, 1,4-dioxane-2,3-diol, 3-bromo-3-methyl-butyric acid, and 1-hydroxy-2-butanone. Other components present in the leaves and twigs include β-sitosterol and eicosan-10-ol. β-Sitosterol content is around 1% when extracted using petroleum ether and ethyl acetate. The needles of Taxus × media, which regenerate readily and are abundant, yield compounds such as paclitaxel, taxol B, 10-DAB, 10β-hydroxybutyrate-10-deacetyl taxol, taxinine A, taxinine B, 5α-cinnamoyltaxicin I, and 5α-decinnamoyltaxagifine upon extraction. 10-DAB is the most abundant at approximately 1.47%, with paclitaxel at about 0.308% in the extracts. In addition, the needles of Taxus × media contain a variety of flavonoid compounds, primarily flavones, dihydroflavones, and biflavones. Concentrations of specific flavonoids within the needle extracts are notably high: myricetin 3-O-rutinoside constitutes 15.1% of the total flavonoids, quercetin 3-O-rutinoside 57.3%, kaempferol 3-O-rutinoside 7.1%, quercetin 7-O-glucoside 3.2%, and kaempferol 7-O-glucoside 0.7%. Myricetin (0.5%), quercetin (0.4%), and kaempferol (0.4%) have also been detected in the extracts. 7,7″-Dimethoxyagastisflavone (DMGF) is a biflavone isolated from the needles of Taxus × media.

3.1.2. Chemical Composition of Taxus × media Bark

The crude extract of Taxus × media bark contains some low-polarity substances such as chlorophyll, sterols, and resins, which can be removed with low-polarity organic solvents (such as petroleum ether). Analysis of specific components in the bark and root bark of Taxus × media indicates that the paclitaxel content of these parts is relatively low. Apart from paclitaxel, the bark also contains 10-DAB, taxadiene, taxinine, and taxol B. Among the taxane compounds, paclitaxel is the most abundant, followed by 7-epi-10-deacetylbaccatin, cephalomannine, and 10-DAB, with paclitaxel content at about 0.0439%; the concentrations of paclitaxel and 10-DAB in the bark are higher than in the leaves and twigs. These compounds also have significant medicinal value and play an important role in future drug development and research.
A study of the chemical and protein components of the stem and bark of Taxus × media employed metabolomic and proteomic approaches. Phytochemical analysis indicated a higher concentration of paclitaxel in the phloem, and 10 critical enzymes involved in paclitaxel biosynthesis were identified, most of which are produced primarily in the phloem. Further in vitro and in vivo studies showed that TmMYB3 (Taxus media MYB3) participates in the biosynthesis of paclitaxel by activating the expression of taxane 2α-O-benzoyltransferase (TBT) and taxadiene synthase (TS). The phloem-specific TmMYB3 is involved in the transcriptional regulation of paclitaxel biosynthesis, potentially explaining the phloem-specific accumulation of paclitaxel.

3.1.3. Chemical Composition of Taxus × media Seeds

The primary chemical components in the seeds of Taxus × media are taxane compounds. The taxanes isolated from Taxus × media seeds include paclitaxel, taxinine A, baccatin III, 9-deacetyltaxinine, 9-deacetyltaxinine E, 2-deacetyltaxinine, taxezopidine G, 2-deacetoxytaxinine J, and 2-deacetoxytaxuspine C. The seeds are also rich in flavonoid compounds, exceeding the seed coat and bark in content, with substances such as naringenin, aromadendrin, galangin, epigallocatechin, and gallocatechin. Polyprenols have been isolated from Taxus × media seeds and identified using techniques such as high-performance liquid chromatography/mass spectrometry (HPLC/MS). The results indicate that the content of TPs (Taxus polyprenols) in the seeds is as high as 3%, making the seeds an alternative plant source for extracting polyprenols. Polyprenol compounds may inhibit tumor growth by inducing cancer cells to undergo programmed cell death (apoptosis). Recent research continues to discover and identify new compounds in different parts of Taxus × media, enhancing our understanding of the plant's chemical diversity and the basis of its bioactive substances. The extraction and analysis of these compounds typically employ techniques such as liquid chromatography, gas chromatography, mass spectrometry, and nuclear magnetic resonance, which allow accurate identification of compound structures, analysis of their biological activity, and exploration of their medicinal potential. As extraction and analysis technologies advance, more new compounds are being isolated and identified from Taxus × media, further expanding our understanding of the chemical diversity of this valuable plant.

3.2. The Impact of Origin and Growth Duration on the Variation of Active Components in Taxus × media

The variation in the active components of Taxus × media is influenced by multiple factors, including geographic origin, growth period, soil conditions, temperature, and humidity, all of which interact to shape the plant's metabolic profile. Geographic origin and growth period are pivotal, significantly affecting the quantity of taxane compounds. Published findings show how the content of paclitaxel and the total amount of three key taxanes (10-DAB, cephalomannine, and baccatin III) vary with region and growth year. Cluster analysis has revealed that material from the Fuzhou region of Fujian is distinct from other areas, showing a uniquely high concentration of paclitaxel.
Compared to regions such as Jiangsu, Shaanxi, Zhejiang, and Yunnan provinces, Taxus × media from the Fuzhou region exhibits pronounced regional specificity in its metabolite profile, likely owing to the region's subtropical climate, whose abundant rainfall and warmth favor accumulation of this metabolite. In contrast, the total amount of the three main taxanes (10-DAB, cephalomannine, and paclitaxel) differs substantially in the Kaifeng region of Henan, which does not follow the general trend observed elsewhere. Notably, the highest levels of 10-DAB, and the highest combined levels of these three taxanes, have been found in four-year-old Taxus × media from Kaifeng, Henan. In this dataset, 10-DAB levels in Kaifeng were 3.83 times those of the Fujian region, cephalomannine levels in Lishui, Zhejiang, were 4.59 times those of Benxi, Liaoning, and paclitaxel levels in Lishui, Zhejiang, were 2.17 times those of Benxi, Liaoning. These variations are statistically significant (p < 0.01), underscoring the pronounced impact of geographic origin and growth period on the biosynthesis of these chemotherapeutically important compounds. Moreover, Taxus × media can be successfully cultivated across most parts of China, with paclitaxel content in some introduced regions even surpassing that of its original habitat. Further studies reveal distinct concentrations of key taxane compounds when the same growth durations are compared across regions, specifically Hainan and Sichuan. For three-year-old twigs, Hainan consistently shows higher paclitaxel concentrations than Sichuan, while 10-DAB levels are generally higher in Sichuan across multiple growth years. Cephalomannine content varies, with no consistent pattern between the two regions across the growth periods analyzed (p > 0.05). The distinct biochemical profiles of Taxus × media from these regions reflect the influence of local environmental conditions such as climate, soil type, and altitude. Hainan, with its tropical monsoon climate, offers abundant rainfall and a longer growing season that may enhance the biosynthesis of certain taxanes, in contrast to Sichuan, where varying altitude and distinct seasonal changes may affect plant metabolism differently. These regional differences are crucial for understanding variation in metabolite synthesis within Taxus × media and provide essential insights into optimal cultivation practices and harvest timings for maximizing the yield of these valuable compounds.

3.3. Comparison of Chemical Components between Taxus × media and Other Taxus Species

A comparative analysis of metabolites from different Taxus species identified 2246 metabolites and revealed significant differences among species. The contents of paclitaxel and cephalomannine are highest in Taxus cuspidata, followed by Taxus × media; however, the 10-DAB content is lowest in Taxus × media, at less than half that of the other species. Baccatin III content is highest in Taxus yunnanensis and slightly lower in Taxus mairei, while 10-DAT content is generally low. The total content of the five taxane compounds is highest in Taxus chinensis, followed by Taxus × media, Taxus yunnanensis, and Taxus mairei.
In Taxus wallichiana, the average cephalomannine content is higher in the leaves than in the stem, the opposite of Taxus × media; in both species, paclitaxel content is higher in the leaves than in the stem, and the stem and leaf paclitaxel content of Taxus × media is not lower than that of Taxus wallichiana. Paclitaxel, the most well-known component, is an effective anticancer drug, particularly significant in treating ovarian and breast cancer, and different parts of Taxus × media are extensively used for its extraction. Compared with other yew species, the chief advantage of Taxus × media is its high paclitaxel content. According to a report by the Sichuan Academy of Forestry Sciences, the average paclitaxel content in Taxus × media is 0.0385%, which is 7.3 times that of Taxus brevifolia, 4.6 times that of Taxus yunnanensis, and 4.1 times that of Taxus mairei. In a study comparing five domestic yew species with Taxus × media, the branch and leaf paclitaxel content of Taxus yunnanensis was found to be 0.0100%, versus 0.0130% in Taxus × media.
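The fold differences above imply absolute paclitaxel contents for the comparator species. A short back-calculation (purely illustrative, taking the 0.0385% figure and the reported ratios at face value) makes the comparison concrete and highlights that the two studies cited paint rather different pictures:

```python
# Back-calculate implied paclitaxel contents from the Sichuan Academy of
# Forestry Sciences figures: T. x media = 0.0385%, quoted as 7.3x
# T. brevifolia, 4.6x T. yunnanensis, and 4.1x T. mairei.
media = 0.0385  # % paclitaxel in Taxus x media
fold = {"T. brevifolia": 7.3, "T. yunnanensis": 4.6, "T. mairei": 4.1}

for species, ratio in fold.items():
    print(f"{species}: ~{media / ratio:.4f}%")
# -> T. brevifolia ~0.0053%, T. yunnanensis ~0.0084%, T. mairei ~0.0094%

# The second study cited reports T. yunnanensis at 0.0100% versus
# T. x media at 0.0130%, a ratio of only ~1.3x -- a reminder that such
# comparisons depend heavily on tissue, plant age, and assay method.
print(f"second study ratio: {0.0130 / 0.0100:.2f}x")
```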
Taxus × media exhibits significant and diverse pharmacological activities, arising from compounds with distinct pharmacological properties. Its anticancer, antibacterial, anti-diabetic, anti-inflammatory, and antioxidant effects are discussed in turn below. shows the corresponding pharmacological activities of the main compounds of Taxus × media.

4.1. Anticancer Activity

4.1.1. Anticancer Activity of Monomeric Compounds

Taxane Compounds

Paclitaxel, the principal antitumor component of Taxus × media, is a diterpenoid compound. Because it is concentrated in the bark and leaves, the plant has been extensively harvested and used in various traditional medical systems to treat many diseases. Paclitaxel inhibits tumor cell mitosis and proliferation by promoting microtubule stabilization, a mechanism that has led to its widespread use in the treatment of various cancers, including ovarian cancer, breast cancer, Kaposi's sarcoma, and lung cancer. Since its FDA approval in 1992, paclitaxel has been recognized globally as an anticancer drug. Among the many taxane compounds with anticancer activity, paclitaxel is the most potent, with an IC50 value of 2.5–7.5 nM. In vitro, paclitaxel shows significant growth inhibition against the transplantable P388, L1210, and P1534 leukemia cell lines and against human ovarian cancer cells, while in vivo experiments have demonstrated potent activity against B16 melanoma and MX-1 breast cancer, as well as activity against LX-1 lung cancer, CX-1 colon cancer, P388 leukemia, Lewis lung carcinoma, and sarcoma S180. A series of hydroxylation steps on the taxane core, mediated primarily by cytochrome P450 oxidases, is essential for producing functionalized taxanes. Paclitaxel is produced mainly through semi-synthesis from precursors (such as baccatin III) that are more readily obtained from various yew species.
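The IC50 values in this subsection are reported in mixed units: nM for paclitaxel, µg/mL for the taxanes discussed below, which obscures the potency gap. A quick conversion onto a common molar scale makes the comparison explicit; the molecular weights used here are published values (paclitaxel ≈ 853.9 g/mol; cephalomannine ≈ 831.9 g/mol) and the snippet is illustrative only:

```python
# Put the IC50 values quoted in this section on a common molar scale.
# Molecular weights (g/mol): paclitaxel 853.9, cephalomannine (taxol B) ~831.9.
# IC50 values are those quoted in the text.

def ug_per_ml_to_nM(ic50_ug_ml: float, mw_g_mol: float) -> float:
    """Convert an IC50 from ug/mL to nM (1 ug/mL = 1e-3 g/L)."""
    return ic50_ug_ml * 1e6 / mw_g_mol

# Cephalomannine vs MCF-7 cells (quoted below as 0.86 ug/mL):
ceph_nM = ug_per_ml_to_nM(0.86, 831.9)          # ~1034 nM
print(f"cephalomannine IC50 ~= {ceph_nM:.0f} nM")

# Paclitaxel is quoted directly as 2.5-7.5 nM, i.e. on the order of
# a hundredfold more potent than cephalomannine on this comparison.
print(f"potency ratio: {ceph_nM / 7.5:.0f}x to {ceph_nM / 2.5:.0f}x")
```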
Recent research on Taxus × media cell culture has revealed the potential of coronatine (COR) and coronafacic acid (CAL) to increase the production of taxanes, especially paclitaxel. The combined use of COR and CAL significantly increased the total yield of paclitaxel and promoted its excretion into the culture medium, indicating the sustainability and economic feasibility of this method for enhancing paclitaxel production. Cephalomannine, found in high concentrations in the twigs and leaves of Taxus × media, shows strong anticancer potential, particularly against breast cancer MCF-7 cells, with a dose-dependent IC50 value of 0.86 µg/mL. It also effectively inhibits P388 lymphocytic leukemia, highlighting its broad-spectrum anticancer activity. Cephalomannine was found to effectively inhibit the progression of bladder cancer in various experimental models, including cultured cell lines, organoids, and an in vivo model of lymphatic metastasis, with no significant reported toxicity. These results suggest that cephalomannine, through its impact on UBE2S, holds promise as a treatment for bladder cancer, particularly in cases prone to lymphatic metastasis, and could inform new clinical approaches for patients at high risk of metastatic disease. In a previous study, male BALB/c nude mice subcutaneously implanted with human lung cancer H460 cells were treated with cephalomannine at 0.4 mg/kg via intraperitoneal injection. Cephalomannine significantly reduced tumor volume and weight, while no significant loss was detected in the body or organ weights of the animals, suggesting that it markedly suppressed the growth of the lung cancer xenografts without major side effects. Taxinine, primarily found in the twigs and leaves of Taxus × media, has various derivatives. Taxinine A exhibits cytotoxic effects on breast cancer, colon cancer, and oral squamous carcinoma cells. With an IC50 value of 5.336 µg/mL, it significantly reduces MCF-7 cell proliferation after 72 h in a time- and dose-dependent manner, although it is less potent than paclitaxel. Although cephalomannine and taxinine have shown potential anticancer effects, these compounds have not yet been registered by the FDA or any other drug regulatory agency. 10-DAB, an effective anticancer compound, significantly inhibits various cancer cell lines. This class of compounds can inhibit the proliferation of many cancer cell lines and exert antitumor effects by inhibiting the accumulation and suppressive function of myeloid-derived suppressor cells (MDSCs). Treatment of MCF-7 cells with 10-DAB at 5.446 µg/mL inhibited proliferation by 44.8% at 24 h, rising to 49.6% at 72 h. A study explored the effects of 10-DAB on tumor growth in mice infected with the Moloney murine sarcoma virus. Male NMRI mice were injected intramuscularly with the virus to induce tumor growth and then treated intraperitoneally with 100 µg of 10-DAB on the first three days post infection.
The results showed that while 10-DAB did not prevent tumor formation, it significantly reduced tumor size, decreasing the mean tumor diameter by 35% compared with the control group and demonstrating potential antitumor activity. In experiments with female CDF1 mice, IDN 5390, a derivative of 10-DAB, was tested both orally and intravenously to assess its pharmacokinetic properties. Administered at doses of 60 to 120 mg/kg, IDN 5390 demonstrated strong oral bioavailability of 43% at the lowest dose, though this decreased at higher doses. The drug was quickly absorbed, reaching peak plasma concentrations within 15 to 30 min, and was widely distributed in vital organs such as the liver, kidneys, and heart. Interestingly, its concentration in the brain remained elevated longer than in other tissues, suggesting potential utility in treating brain tumors. Daily treatment with IDN 5390 in mice bearing established lung micrometastases from B16BL6 murine melanoma reduced the size of the metastases. Baccatin III derivatives are precursors for the semi-synthesis of paclitaxel. Baccatin III has been found to inhibit bleomycin A5-induced rat pulmonary fibrosis and can be used in tumor chemotherapy. Additionally, it can be used to synthesize Taxotere (docetaxel), a compound with higher anticancer activity. B16 melanoma shows high sensitivity to Taxotere. Taxotere has shown positive effects in treating pancreatic ductal adenocarcinoma and colon adenocarcinoma, achieving multiple cures in the early stages of both tumor types and an over 80% complete remission rate in their advanced stages. In in vivo experiments, oral administration of baccatin III significantly reduced the growth of tumors induced by engrafting BALB/c mice with either 4T1 mammary carcinoma or CT26 colon cancer cells, and baccatin III decreased the accumulation of MDSCs in the spleens of the tumor-bearing mice. Furthermore, 7-xylosyl-10-deacetyltaxol shows effective inhibitory action on various tumor cell lines, with IC50 values of 0.3776 µg/mL against the breast cancer cell line MCF-7 and 0.86 µg/mL against colon cancer cell lines, indicating high efficacy against these cancers. At present, the principal taxane drugs are paclitaxel and the semi-synthetic Taxotere, both produced from baccatin III or 10-DAB as precursors. Paclitaxel and Taxotere have a wide antitumor spectrum and are effective against a variety of drug-resistant tumor cell lines. They are mainly used as single agents in the treatment of ovarian cancer, breast cancer, small cell and non-small cell lung cancer, and head and neck cancer, and also show significant effects against esophageal cancer, nasopharyngeal cancer, bladder cancer, lymphoma, prostate cancer, malignant melanoma, and gastrointestinal cancers.

Other Non-Taxane Anticancer Components

In addition to taxanes such as paclitaxel, Taxus × media contains other substances with anticancer activity, such as flavonoid compounds. For instance, apigenin exhibits significant antitumor activity. It acts through multiple mechanisms, including inducing apoptosis, regulating the cell cycle, and inhibiting cancer cell migration and invasion, and has been shown to interact with several cellular signaling pathways, such as PI3K/AKT/mTOR and MAPK/ERK, that are crucial in cancer treatment.
Further research has revealed the potential of 7,7″-dimethoxyagastisflavone (DMGF), extracted from Taxus × media cv. Hicksii, to inhibit cancer cell proliferation. DMGF can induce apoptosis and autophagy in cancer cells and has been shown to inhibit B16F10 cell motility in transwell assays. Real-time PCR results indicate that DMGF also reduces the expression of matrix metalloproteinase-2 (MMP-2) and decreases the vascular density of tumors in vivo. Its anti-metastatic effect originates partly from downregulation of the Cdc42/Rac1 pathway, affecting F-actin aggregation and reducing CREB phosphorylation, thereby inhibiting pseudopodia formation. Additionally, polyprenols isolated from the seeds of Taxus × media (Section 3.1.3) are known for various pharmacological activities, chiefly anticancer; polyprenol compounds may inhibit tumor growth by inducing cancer cells to undergo apoptosis.

4.1.2. Anticancer Activity of Extracts

The extracts of Taxus × media have been extensively studied and found to contain a diverse array of biologically active compounds, including not only well-known molecules such as paclitaxel but also a variety of flavonoids, terpenes, organic acids, and amino acids, all contributing to the extracts' pharmacological profile. Flavonoids and terpenes in particular have been spotlighted for their potent anticancer properties. Recent studies using advanced analytical techniques, such as gas chromatography–mass spectrometry (GC–MS) and high-performance liquid chromatography (HPLC), have provided deeper insights into the complex composition of these extracts, revealing multiple active components that may act in concert and suggesting a potential synergistic interaction. One GC–MS study analyzed and identified compounds in the leaves of Taxus × media and explored their potential biological activities, identifying 20 compounds with significant bioactivity, mainly flavonoids, terpenes, organic acids, amino acids and their derivatives, and alcohols; the extracts displayed notable anticancer properties. The observed pharmacological activities are not solely attributable to paclitaxel. While paclitaxel plays a significant role owing to its well-documented anticancer efficacy, other compounds such as flavonoids and terpenes may enhance or complement the anticancer activity through various mechanisms. For instance, some flavonoids have been shown to induce apoptosis and inhibit angiogenesis in tumor cells, while terpenes may disrupt cellular processes critical for cancer cell survival and proliferation. The suggestion of a synergistic effect is particularly intriguing and warrants further investigation: preliminary in vitro studies indicate that these extracts can inhibit the growth of various cancer cell lines more effectively than would be expected from paclitaxel alone, possibly because multiple compounds target different pathways involved in cancer progression, thereby increasing the overall anticancer efficacy of the extract.
Given these promising findings, further clinical studies are needed to explore the therapeutic potential of these extracts. Such studies could provide vital data on efficacy and safety, paving the way for their potential use as comprehensive anticancer treatments, and are essential to validate the anticancer activities observed in preclinical models.

4.2. Antibacterial Activity

Taxus × media also demonstrates significant activity against microbial pathogens. Zhang et al. tested the antibacterial properties of essential oils extracted from fresh leaves of Taxus × media using the paper disc agar diffusion method and the minimum inhibitory concentration method. The essential oil significantly inhibits and kills microbes such as Staphylococcus aureus and Escherichia coli, an activity attributed to the combined effects of its various constituents. The main components, including cis-3-hexen-1-ol and pentenyl ethyl alcohol, are considered to play an essential protective role during the plant's growth and to effectively inhibit bacterial proliferation, and compounds such as benzaldehyde have proven strong inhibitory effects on various microbes. Compared with the volatile oil from the leaves of Taxus mairei, the volatile oil from Taxus × media leaves shows stronger antibacterial activity. Dar et al. extensively explored the antibacterial activity of ten different solvent extracts from the leaves against various bacteria, such as Bacillus pumilus, Staphylococcus aureus, Pseudomonas aeruginosa, and Escherichia coli, with significant results. Recent research has also revealed the potential of endophytic fungi in drug development, especially for producing biologically active compounds. Endophytic fungi isolated from Taxus × media, including graminicolous Helminthosporium, Bipolaris australiensis, and Cladosporium cladosporioides, were found to produce various bioactive compounds, including anthraquinones, barbiturates, benzopyrroles, and ethyl quinolines. These endophytic fungi showed significant antifungal effects: evaluation by the agar well diffusion method revealed strong antifungal activity in both intracellular and extracellular extracts. Notably, endophytic fungi from Fujian Province, China, exhibited significant inhibitory capability against pathogenic fungi such as Neurospora sp., Trichoderma sp., and Fusarium sp.; among them, fungi of the genus Paecilomyces showed the highest positive rate of antifungal activity. These substances are effective against various pathogens, including some that have developed resistance to antibiotics.

4.3. Anti-Diabetic Activity

Extracts of Taxus × media have shown potential in treating diabetes and its related complications, and the plant demonstrates significant anti-hyperglycemic activity. A study using C57BL/6 mice fed a high-fat diet investigated the effect of Taxus × media extract on insulin resistance. The ethyl acetate extract (Tw-EA) significantly reduced blood glucose levels, decreased the production of inflammatory cytokines, and reduced weight gain.
This suggests that Taxus × media has therapeutic effects against inflammation-induced insulin resistance. Tw-EA treatment also reduced lipid accumulation in adipocytes and decreased the infiltration of inflammatory cells in skeletal muscle and adipose tissue, thereby improving insulin resistance. Paclitaxel from Taxus × media, for which extraction, separation, and detection methods have been established, also shows a degree of glucose-lowering activity. Its mechanism of action differs from that of currently marketed diabetes treatments: it may help restore the damaged pancreatic islet system, potentially opening a new direction in the development of diabetes drugs. Dai et al. studied the glucose-lowering effects of sequoyitol, an inositol derivative from the plant. In a type 2 diabetes rat model, sequoyitol significantly reduced blood glucose levels. Sequoyitol competitively inhibits α-glucosidase activity and promotes glucose uptake in adipocytes, exerting a blood glucose-lowering effect comparable to 20 mg/kg of acarbose. A further radioimmunoassay showed that sequoyitol could reduce the insulin resistance index in rats and promote insulin secretion. p66shcA is a key protein in oxidative stress regulation, and RT-PCR studies found that sequoyitol significantly reduced rat p66shcA mRNA expression even at low doses. Immunohistochemistry showed that specific doses of sequoyitol significantly reduced the expression and phosphorylation of the p66shcA protein in rat thoracic aortas. Colorimetric assays showed that sequoyitol reduced plasma malondialdehyde content, and DHE staining further showed that it significantly inhibited the production of reactive oxygen species in rat aortas, suggesting that sequoyitol may benefit diabetic cardiovascular complications. Research has also found that the fruit of Taxus × media exhibits a degree of anti-hyperglycemic activity. The hypoglycemic effect can be attributed to various bioactive compounds, including flavonoids, which may interact with pathways of glucose metabolism by affecting insulin secretion, enhancing glucose uptake in peripheral tissues, inhibiting carbohydrate-digesting enzymes, or mimicking insulin action.

4.4. Anti-Inflammatory Activity

Significant findings have also been made regarding the anti-inflammatory activity of Taxus × media. The active component baccatin III, known for its antitumor activity, also effectively inhibits bleomycin (BLM)-induced rat pulmonary fibrosis, alleviating alveolar inflammation and the extent of pulmonary fibrosis (p < 0.01) and reducing the expression of ERK1. Its mechanism of action is related to improving the abnormal deposition of extracellular matrix and inhibiting excessive repair of injured lung tissue. A study investigated the analgesic and anti-inflammatory activities of several compounds isolated from the bark extract, including tasumatrol B, 1,13-diacetyl-10-deacetylbaccatin (10-DAD), and 4-deacetylbaccatin (4-DAB). Four hours post-administration, the 95% ethanol extract showed effective anti-inflammatory activity at a dose of 200 mg/kg compared with the ether extract and the reference standard aspirin. These compounds were evaluated for their analgesic and anti-inflammatory potential, providing further scientific evidence for the traditional medicinal use of Taxus species.
Another study explored the anti-inflammatory action of taxusabietane A extracted from Taxus × media. Taxusabietane A showed significant anti-inflammatory activity, consistent with the plant's use in folk medicine for treating inflammation-related diseases, along with potent lipoxygenase (LOX) inhibitory activity, with an IC50 value of 57 ± 0.31, and significant anti-inflammatory effects at 5 and 10 mg/kg.

4.5. Antioxidant Activity

Li et al. conducted DPPH free radical scavenging experiments on the total flavonoid extract from the twigs and leaves of Taxus × media. Comparing the scavenging rates of various concentrations of the extract with those of vitamin C, they found that antioxidant activity increases with mass concentration. The scavenging rates for DPPH radicals, ABTS+ radicals, and nitrite were 91.04%, 99.17%, and 65.50%, respectively, indicating that the total flavonoids of Taxus × media twigs and leaves possess strong in vitro antioxidant capability.
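For context, radical scavenging percentages of this kind are conventionally computed from absorbance measurements. A standard form (assuming the usual DPPH/ABTS protocol, with $A_0$ the absorbance of the radical solution without sample and $A_s$ the absorbance with sample) is:

$$\text{Scavenging rate}\,(\%) = \frac{A_0 - A_s}{A_0} \times 100$$

On this definition, the 91.04% DPPH figure corresponds to the extract eliminating roughly nine-tenths of the radical absorbance at the tested concentration.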
Cephalomannine was found to effectively inhibit the progression of bladder cancer in various experimental models, including cultured cell lines, organoids, and an in vivo model of lymphatic metastasis. Importantly, this inhibition occurred with no significant reported toxicity. These results suggest that cephalomannine, through its impact on UBE2S, holds promise as a treatment for bladder cancer, particularly in cases prone to lymphatic metastasis. This could lead to new clinical approaches for managing bladder cancer, especially for patients at high risk of metastatic disease. In a previous study, male BALB/c nude mice were used as the animal model, with human lung cancer H460 cells subcutaneously implanted to simulate lung cancer. Cephalomannine was administered at a dosage of 0.4 mg/kg via intraperitoneal injection. The results indicated that cephalomannine significantly reduced tumor volume and weight. No significant loss was detected in the body and organ weights of the experimental animals, suggesting that cephalomannine significantly suppressed the growth of lung cancer cell xenografts without major side effects in the mice. Taxinine, primarily found in the twigs and leaves of Taxus × media, occurs as various derivatives. Taxinine A exhibits cytotoxic effects on breast cancer, colon cancer, and oral squamous carcinoma cells. With an IC50 value of 5.336 µg/mL, it significantly reduces MCF-7 cell proliferation after 72 h in a time- and dose-dependent manner, albeit less potently than paclitaxel. Although cephalomannine and taxinine have shown potential anticancer effects, these compounds have not yet been officially registered by the FDA or any other drug regulatory agency. 10-Deacetylbaccatin III (10-DAB), an effective anticancer compound, has significantly inhibited various cancer cell lines. This class of compounds can inhibit the proliferation of many cancer cell lines and exert antitumor effects by inhibiting the accumulation and suppressive function of myeloid-derived suppressor cells (MDSCs). In MCF-7 cells treated with 10-DAB at a concentration of 5.446 µg/mL, cell proliferation was significantly inhibited, with an inhibition rate of 44.8% after 24 h that increased to 49.6% after 72 h. A study explored the effects of 10-DAB on tumor growth in mice infected with the Moloney murine sarcoma virus. The research utilized male NMRI mice, which were injected intramuscularly with the virus to induce tumor growth. Following the infection, the mice were treated intraperitoneally with 100 µg of 10-DAB on the first three days post-infection. The results showed that while 10-DAB did not prevent tumor formation, it significantly reduced the size of the tumors: the mean tumor diameter was 35% smaller than in the control group, demonstrating its potential antitumor activity. In experiments with female CDF1 mice, IDN 5390, a derivative of 10-DAB, was tested both orally and intravenously to assess its pharmacokinetic properties. Administered in doses ranging from 60 to 120 mg/kg, IDN 5390 demonstrated strong oral bioavailability of 43% at the lowest dose, though this decreased with higher doses. The drug was quickly absorbed, showing peak plasma concentrations within 15 to 30 min, and was widely distributed in vital organs such as the liver, kidneys, and heart. Interestingly, its concentration in the brain remained elevated longer than in other tissues, suggesting potential utility in treating brain tumors.
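For readers unfamiliar with how an absolute oral bioavailability figure such as the 43% reported for IDN 5390 is obtained, the sketch below applies the standard dose-normalized AUC ratio; the concentration-time points and doses are placeholders, not data from the cited study.

```python
# Sketch of absolute oral bioavailability, F = (AUC_oral / Dose_oral) /
# (AUC_iv / Dose_iv), with AUCs approximated by the trapezoidal rule.
# All concentration-time values and doses are placeholders.
import numpy as np

def auc(t, c):
    """Trapezoidal area under a concentration-time curve."""
    return float(((c[1:] + c[:-1]) / 2.0 * np.diff(t)).sum())

t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])        # hours after dosing
c_oral = np.array([1.2, 2.0, 1.6, 0.9, 0.4, 0.1])    # ug/mL, 60 mg/kg oral
c_iv = np.array([8.0, 5.5, 3.4, 1.5, 0.5, 0.1])      # ug/mL, 30 mg/kg iv

F = (auc(t, c_oral) / 60.0) / (auc(t, c_iv) / 30.0)
print(f"absolute oral bioavailability F ~ {F:.0%}")
```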
Daily treatment with IDN 5390 in mice bearing established lung micrometastases from the B16BL6 murine melanoma caused a reduction in the size of the metastases. Baccatin III derivatives are precursors for the semi-synthesis of paclitaxel. Baccatin III has been found to inhibit bleomycin A5-induced pulmonary fibrosis in rats and can be used in tumor chemotherapy. Additionally, it can be used to synthesize Taxotere (docetaxel), a compound with higher anticancer activity; B16 melanoma shows high sensitivity to Taxotere. Taxotere has shown positive effects in treating early-stage pancreatic ductal adenocarcinoma and colon adenocarcinoma, achieving multiple cures in the early stages of both tumors, and achieved an over 80% complete remission rate in their advanced stages. In vivo, oral administration of baccatin III significantly reduced the growth of tumors induced by engrafting BALB/c mice with either 4T1 mammary carcinoma or CT26 colon cancer cells, and baccatin III decreased the accumulation of MDSCs in the spleens of the tumor-bearing mice. Furthermore, 7-Xylosyl-10-deacetyltaxol shows effective inhibitory action on various tumor cell lines, with IC50 values of 0.3776 µg/mL against the breast cancer cell line MCF-7 and 0.86 µg/mL against colon cancer cell lines, indicating its high efficacy against these cancer cell lines. At present, the only taxanes in clinical use are paclitaxel and the semi-synthetic drug Taxotere, with baccatin III or 10-DAB as their precursors. Paclitaxel and Taxotere have a broad antitumor spectrum and are effective against a variety of drug-resistant tumor cell lines. They are mainly used as single agents in the treatment of ovarian cancer, breast cancer, small cell and non-small cell lung cancer, and head and neck cancer, and also have significant effects on esophageal cancer, nasopharyngeal cancer, bladder cancer, lymphoma, prostate cancer, malignant melanoma, and gastrointestinal cancer. Other Non-Taxane Anticancer Components In addition to taxane compounds like paclitaxel, Taxus × media contains other substances with anticancer activity, such as flavonoid compounds. For instance, apigenin exhibits significant antitumor activity. It works through multiple mechanisms, including inducing apoptosis, regulating the cell cycle, and inhibiting cancer cell migration and invasion. Apigenin has been shown to interact with several cellular signaling pathways, such as PI3K/AKT/mTOR and MAPK/ERK, which are crucial in cancer treatment. Further research has also revealed the potential of 7,7″-dimethoxyagastisflavone (DMGF), extracted from Taxus × media cv. Hicksii, in inhibiting cancer cell proliferation. DMGF can induce apoptosis and autophagy in cancer cells and has been shown to inhibit B16F10 cell mobility in trans-well assays. Real-time PCR results indicate that DMGF also reduces the expression of matrix metalloproteinase-2 (MMP-2) and decreases the vascular density of tumors in vivo. Its anti-metastatic effect partly originates from downregulation of the Cdc42/Rac1 pathway, which affects F-actin aggregation and reduces CREB phosphorylation, thereby inhibiting pseudopodia formation. Additionally, polyprenols can be isolated from the seeds of Taxus × media. Polyprenols are known for their various pharmacological activities, chiefly their anticancer properties, and are identified using high-performance liquid chromatography/mass spectrometry (HPLC/MS).
The results indicate that the content of TPs (Taxus polyprenols) in the seeds is as high as 3%, making them an alternative plant source for extracting polyprenols. Polyprenol compounds may inhibit tumor growth by inducing cancer cells to undergo programmed cell death (apoptosis). 4.1.2. Anticancer Activity of Extracts The extracts of Taxus × media have been extensively studied and found to contain a diverse array of biologically active compounds. These include not only well-known molecules like paclitaxel but also a variety of flavonoids, terpenes, organic acids, and amino acids, all contributing to the extract's pharmacological profile. Flavonoids and terpenes have been spotlighted for their potent anticancer properties. Recent studies using advanced analytical techniques, such as gas chromatography–mass spectrometry (GC–MS) and high-performance liquid chromatography (HPLC), have provided deeper insights into the complex composition of these extracts. These analyses reveal the presence of multiple active components that may work in concert to exert anticancer effects, suggesting a potential synergistic interaction among these compounds. A GC–MS study analyzed and identified compounds in the leaves of Taxus × media, exploring their potential biological activities. The study identified 20 compounds with significant bioactivity, mainly flavonoids, terpenes, organic acids, amino acids and their derivatives, and alcohols. Tests showed that the extracts of Taxus × media displayed notable anticancer properties. The observed pharmacological activities are not solely attributable to the action of paclitaxel. While paclitaxel plays a significant role due to its well-documented anticancer efficacy, other compounds such as flavonoids and terpenes may enhance or complement the anticancer activity through various mechanisms. For instance, some flavonoids have been shown to induce apoptosis and inhibit angiogenesis in tumor cells, while terpenes might disrupt cellular processes critical for cancer cell survival and proliferation. The suggestion of a synergistic effect is particularly intriguing and warrants further investigation. Preliminary in vitro studies indicate that these extracts can inhibit the growth of various cancer cell lines more effectively than would be expected from the activity of paclitaxel alone. This synergy could be due to multiple compounds targeting different pathways involved in cancer progression, thereby increasing the overall anticancer efficacy of the extract. Given these promising findings, it is crucial to pursue further clinical studies to explore the therapeutic potential of these extracts. Such studies could provide vital data on the efficacy and safety of the extracts, paving the way for their potential use as comprehensive anticancer treatments. These clinical investigations are essential to validate the anticancer activities observed in preclinical models and to assess the therapeutic viability of using Taxus × media extracts in oncology.
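One common way to test the kind of synergy invoked here is the Bliss independence model: if two agents acted independently, their expected combined inhibition would be E = Ea + Eb - Ea*Eb, and an observed effect above this expectation suggests synergy. A minimal sketch with hypothetical inhibition fractions (not measurements from the cited studies):

```python
# Hedged sketch of a Bliss-independence synergy check. Inhibition values
# are hypothetical fractions (0 = no effect, 1 = complete inhibition).
def bliss_expected(e_a, e_b):
    """Expected combined inhibition if the two agents act independently."""
    return e_a + e_b - e_a * e_b

e_paclitaxel = 0.40   # inhibition by paclitaxel alone (hypothetical)
e_flavonoids = 0.25   # inhibition by a flavonoid fraction (hypothetical)
e_observed = 0.70     # inhibition by the whole extract (hypothetical)

expected = bliss_expected(e_paclitaxel, e_flavonoids)
print(f"expected {expected:.2f} vs observed {e_observed:.2f} ->",
      "synergy" if e_observed > expected else "no synergy")
```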
Taxus × media also demonstrates significant activity in combating microbial pathogens. Zhang et al. tested the antibacterial properties of essential oils extracted from fresh leaves of Taxus × media, using the paper disc agar diffusion method and the minimum inhibitory concentration method to assess inhibitory effects on microbes. The results showed that the essential oil of Taxus × media significantly inhibits and kills microbes such as Staphylococcus aureus and Escherichia coli. This antibacterial activity is due to the combined effects of various compounds in the essential oil. The main components of the essential oil of Taxus × media include cis-3-hexen-1-ol and pentenyl ethyl alcohol, which are considered to play an essential protective role during the plant's growth and effectively inhibit bacterial proliferation. Additionally, compounds like benzaldehyde have been shown to have good inhibitory effects on various microbes. Compared to the volatile oil from the leaves of Taxus mairei, the volatile oil from Taxus × media leaves shows more robust antibacterial characteristics. Dar and others have extensively explored the antibacterial activity of ten different solvent extracts from the leaves against various bacteria, such as Bacillus pumilus, Staphylococcus aureus, Pseudomonas aeruginosa, and Escherichia coli, with significant results. Recent research has revealed the potential of endophytic fungi in medical development, especially in producing biologically active compounds. For instance, endophytic fungi isolated from Taxus × media, including Graminicolous helminthosporium, Bipolaris australiensis, and Cladosporium cladosporioides, were found to produce various bioactive compounds, including anthraquinones, barbiturates, benzopyrroles, and ethyl quinolines. These endophytic fungi showed significant antifungal effects: agar well diffusion assays revealed strong antifungal activities in both intracellular and extracellular extracts from these fungi.
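The minimum inhibitory concentration (MIC) method mentioned above reduces, computationally, to finding the lowest concentration in a dilution series that still suppresses visible growth; the readings below are invented for illustration and are not data from the cited assays.

```python
# Hedged sketch of MIC determination from a two-fold dilution series.
# Growth readings (e.g., OD600) are hypothetical.
concentrations = [128, 64, 32, 16, 8, 4, 2]             # ug/mL, descending
growth = [0.02, 0.03, 0.02, 0.04, 0.35, 0.80, 0.95]     # OD600 readings
THRESHOLD = 0.05  # below this, growth is considered fully suppressed

mic = None
for c, od in zip(concentrations, growth):
    if od < THRESHOLD:
        mic = c  # keep updating: the lowest suppressive concentration wins
print(f"MIC ~ {mic} ug/mL")
```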
Notably, endophytic fungi from Fujian Province, China, exhibited significant inhibitory capabilities against pathogenic fungi like Neurospora sp., Trichoderma sp., and Fusarium sp. Among them, fungi from the Paecilomyces sp. showed the highest positive rate of antifungal activity. These substances are effective against various pathogens, including those that have developed resistance to antibiotics.
Taxus × media, a valuable plant of medicinal, timber, and ornamental significance originally from North America, is now widely cultivated globally, with successful introductions in regions like Sichuan, China. This plant plays a crucial role in various ecosystems, and its unique growth environment and ecological characteristics are vital for developing effective cultivation and conservation strategies. However, facing resource depletion and increased environmental pressures, this species confronts multiple challenges, including habitat loss, overharvesting, and the impacts of climate change. In the past thirty years, many subpopulations, especially in China, have experienced a decline of over 30%. In countries like India, Nepal, and Vietnam, their conservation status has reached critical and endangered levels. Therefore, understanding its cultivation status, assessing the survival of wild populations, and exploring effective conservation strategies are crucial for its protection. The successful cultivation of Taxus × media is essential for ecological protection and the development of medical resources. Regarding its phytochemical components, the rich bioactive substances in Taxus × media, including paclitaxel and its derivatives, various alkaloids, and flavonoid compounds, are the primary sources of its medicinal value. Taxanes are important bioactive components, but the contents of most of them are very low, and some taxanes are nearly absent. Current research focuses on the extraction, preparation, analysis, and biological activity assessment of these compounds.
Exploring the structure and function of these chemical components is crucial for uncovering the material basis of Taxus × media and developing new drugs. However, a comprehensive analysis of the related literature indicates that more data are needed on the content of these components in the bark and seeds, as well as on many other practical components, necessitating further research. Future studies should therefore explore the detailed distribution of these critical components in different parts of Taxus × media, which is essential for a better understanding of its material basis and the development of medical resources. Additionally, there is a pressing need to study how controlled factors in artificial plantation settings (such as watering, nutrition, and pest control) affect the metabolite content and seasonal variation of these compounds. Pharmacological activity is a significant aspect of Taxus × media. Studies have found that its potential is not limited to anticancer effects but extends to anti-diabetic, anti-inflammatory, antimicrobial, and other applications. The identified taxane and non-taxane components may explain why the extracts of Taxus × media have anticancer and other activities. These studies reveal the actions and mechanisms of the related compounds in Taxus × media, providing a theoretical basis for new drug development and indicating new directions for further clinical applications. However, since the extracts contain many kinds of taxane compounds, it remains unclear whether these taxane compounds share a molecular mechanism of action similar to that of paclitaxel. Also, beyond anticancer activities, efficiently discovering new activities of Taxus × media compounds and revealing their mechanisms is an important direction for future research. The use of plant extracts containing cytotoxins such as paclitaxel in alternative medicine is an important aspect that deserves attention. Although paclitaxel is a well-established chemotherapeutic agent approved for conventional medical use, its incorporation into alternative medical practices raises significant safety concerns. Paclitaxel possesses potent cytotoxic properties, capable of killing cancer cells at very low concentrations, which also implies potential toxicity to healthy cells if not administered with precise control. This duality underscores the need for careful evaluation and regulation when considering such powerful compounds for use in non-traditional treatments. Our discussion aims to highlight the challenges and responsibilities involved in leveraging such potent pharmacological agents outside of controlled medical settings. If Taxus extracts were to be used clinically, the necessary preclinical studies and strict population or patient screening should be conducted in advance to avoid poisoning accidents. Further research is crucial for developing safer, modified derivatives of these compounds that retain therapeutic efficacy while reducing toxicity, making them more suitable for widespread use in alternative therapies. Advanced extraction methods should also be developed to enhance the effectiveness and reduce the toxicity of Taxus-derived compounds.
Although the extract contains toxic compounds such as paclitaxel and should be used with caution, other components in the extract may enhance the efficacy and reduce the toxicity of paclitaxel, a phenomenon characteristic of alternative medicine that may occur in Taxus extracts; however, further validation research is needed. Additionally, in alternative medicine, combining these with other herbs may increase the effectiveness and safety of Taxus formulations. In fact, specific formulas have already been developed in China, but research into more effective and safer Taxus formulations continues. Internationally, Taxus × media and other Asian yew species are listed in Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) to regulate their international trade. This species enjoys first-class protection in China and occurs in important nature reserves, such as the Tangjiahe National Nature Reserve in Sichuan. Moreover, to ensure sustainable future harvests, China and other countries like Nepal, Bhutan, and Vietnam have invested significantly in establishing plantations. However, owing to the slow growth of yew plants and their rarity, the insufficient supply of raw materials for paclitaxel production has become an urgent issue. Current cutting-edge techniques can rapidly expand the area of artificial cultivation. New chemical techniques for semi-synthesizing paclitaxel from intermediates like 10-DAB obtained from Taxus × media can partially solve the problem of the plant's relatively low paclitaxel content. Additionally, techniques for producing paclitaxel and its precursors through endophytic fungi or in vitro cell cultures are being developed, which could partially reduce the dependency on yew cultivation. However, these technologies are not yet mature and remain costly. Therefore, finding new plant resources and developing new technologies to increase paclitaxel production is one of the current challenges. As we advance our understanding of Taxus × media, it becomes imperative to integrate climate resilience into conservation strategies. The susceptibility of Taxus to increased temperatures might reduce its natural range, necessitating the exploration of adaptive cultivation techniques that can withstand broader environmental variations. Furthermore, research into genetic variation across different Taxus populations could identify strains with higher resilience to climate stressors, thereby guiding conservation and breeding programs toward these more robust specimens. In summary, as a plant rich in bioactive compounds, research on the origin and cultivation of Taxus × media, its phytochemical components, and its pharmacological activity is of great significance for new drug development and biodiversity conservation. The development of yew is a systemic project involving multiple disciplines and fields, including agricultural cultivation, environmental protection, active component extraction and identification, biological activity research, and medical applications. Strengthening the protection of this precious species, formulating and implementing more effective conservation measures, and achieving sustainable development will be vital to ensuring its biodiversity and ecological balance. Notably, future research should focus on paclitaxel and the comprehensive utilization of its various components in different medical fields, warranting further anticipation and attention.
Evaluation of the Impacts of a Phone Warning and Advising System for Individuals Vulnerable to Smog. Evidence from a Randomized Controlled Trial Study in Canada | 66bdee65-fef3-4507-8518-1d6596a14817 | 6571566 | Health Communication[mh] | Smog episodes are severe air pollution periods characterized by a mixture of smoke and fog resulting from natural and/or anthropogenic factors . The frequency and intensity of smog episodes is presumed to increase with global warming and urbanization, which represents a major concern for public health authorities, notably in the current context marked by an increase in the proportion of seniors that are particularly vulnerable to this hazard . According to a recent report of the World Health Organization (WHO), air pollution is already the biggest environmental threat to population health as it is the cause of approximatively 10% of deaths . In addition to seniors, individuals with cardiovascular and respiratory diseases as well as children are more at risk of suffering from this hazard . Efforts to curb greenhouse gas emissions are presently at the core of governments’ strategies to contain global warming as evidenced by the Paris Agreement on Climate Change . In addition to mitigation plans, governments also rely on adaptation strategies to limit the impacts of air pollution on population health such is the case of the Quebec’s Government Action Plan on Climate Change 2013–2020 . As part of their adaptation strategies, many countries have implemented air quality monitoring and warning systems . These systems are designed to issue alerts when pollution reaches levels representing significant risk to population health . In addition to providing information on the occurrence of smog episodes, alerts are accompanied by advice to reduce the risks and consequences of exposure to air pollution . The underlying assumption of smog warning systems is that the exposure to alerts improves information on the occurrence of smog episodes, their risks as well as on protective behaviors. This improvement would then lead to adopting recommended behaviors and, ultimately, mitigating the adverse effects of smog on population health and reducing health services use . In parallel to the implementation of air quality warning systems, interest in the evaluation of their performance has increased in recent years. In accordance with the logic model of smog warning systems, special attention was paid to their impacts on the adoption of protective behaviors , reduction in air pollution related morbidity and mortality and use of health system services . In addition, a variety of impact evaluation methods have been used, including the regression discontinuity design , quasi-experimental design and self-reported outcomes . The results of these studies show that air quality warnings improve adherence to protective behaviors and that the magnitude of this effect is influenced by psychological factors such as the perception of air pollution levels and risks . However, evidence of smog alerts effects on morbidity and mortality, and on the use of health systems is mixed. Chen et al. found a reduction in the number of emergency admissions as a result of the implementation of an air quality program in Ontario (Canada), but no effect on smog related mortality like cardiovascular and respiratory-related deaths. Likewise, McLaren and Williams found that air quality alerts were not correlated with the number of daily hospital admissions . 
Finally, the study by Lyon and collaborators shows that exposure to air pollution alerts even resulted in a substantial increase in hospital admissions, a quite unexpected outcome considering that reducing pressure on the health system is among the main goals of these warning and advice measures. The objective of this study is to evaluate the impacts of an automated phone smog warning system (APWS) for individuals vulnerable to this hazard. Its contribution to this field of research is two-fold. First, it is, to our knowledge, among the few studies, if not the only one, to use an experimental design to assess the effects of smog warnings. As is well known, randomization significantly improves the internal validity of impact evaluations. Second, while most studies have analyzed the effects of alerts disseminated through mass media, the focus of this study is on the effects of automated phone warnings. APWSs have the advantage of enabling the delivery of personalized smog alerts and advice to targeted individuals while avoiding overcrowding public media with these warnings. The structure of this article is as follows: the next section presents the study's materials and methods; the subsequent section is devoted to the study findings; this is followed by the discussion of these findings and a conclusion.
The evaluation of the APWS is based on an experimental design in which a sample of study participants was voluntarily recruited and randomly assigned to treatment and control groups (see below). The APWS was programmed to issue automated phone smog warnings, along with protective advice, to the treatment group when Environment Canada predicted that the level of air pollution would reach levels considered prejudicial to population health. To assess its impacts, data on outcome variables were collected when the actual level of air pollution was equal to or higher than the threshold triggering smog warnings (i.e., a true alert). Comparisons between the treatment and control groups on outcome variables were used to assess the effects of this system. A detailed description of the design of this experiment is provided in the following sections. Ethics certification: the protocol of this study was approved by the ethics committee of the Institut National de la Recherche Scientifique (CER-15-370.2.1). 2.1. The Design of the Intervention The APWS was developed and tested between 2015 and 2017 by the Institut National de Santé Publique du Québec (INSPQ) and the Direction Régionale de la Santé Publique de la Montérégie (DSP Montérégie). It is part of the Quebec government's efforts to deal with the climate change challenges facing this province. The APWS was designed to inform vulnerable individuals of the occurrence of excessive heat and smog episodes and provide them with advice on how to protect themselves from these hazards. Compliance with the recommended behaviors is presumed to mitigate the adverse effects of smog on health and reduce health services use. The results on the impacts of heat warnings were published in a previous issue of this journal. This article thus presents the findings related to the second component of this research project, i.e., the evaluation of the effects of winter smog warnings. Winter smog episodes occur in Canada as a result of the use of fossil fuel and wood heating systems in periods of low atmospheric dispersion (Government of Canada), as well as from usual industrial and transportation sources. These episodes are associated with excess morbidity and mortality, mostly among individuals suffering from respiratory and cardiovascular diseases. The APWS was designed to issue warning alerts when the forecasted air pollution reaches levels representing a serious threat to the participants in this study. In this regard, Environment Canada uses the Air Quality Health Index (AQHI) to monitor air quality across different regions of Canada. The AQHI is calculated from the combined relative health risks of ground-level ozone (O3), particulate matter (PM2.5/PM10) and nitrogen dioxide (NO2). To facilitate the communication of pollution health risks to the population, the index was divided into four levels of health risk: low, moderate, high and very high. Environment Canada issues air quality warnings on its website when the predicted risk level is moderate, high or very high. These alerts are then disseminated by mass media in Canada such as TV and radio channels. The APWS was programmed to issue smog warnings along with protective tips when Environment Canada forecasts that the risk level of air pollution is moderate or higher on the AQHI scale. This triggering level was chosen because the objective of the APWS is to serve individuals who are particularly vulnerable to air pollution.
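To illustrate the trigger rule described above, the sketch below computes an AQHI value and maps it to a risk category. The coefficients follow the commonly cited Canadian AQHI formulation, but both they and the category cut-points should be treated as indicative here rather than as the APWS's actual implementation.

```python
# Hedged sketch of an AQHI-style trigger for the APWS. Coefficients and
# cut-points are indicative; the real system relied on Environment
# Canada's own forecasts rather than computing the index itself.
import math

def aqhi(no2_ppb, o3_ppb, pm25_ugm3):
    """Air quality health index from average pollutant concentrations."""
    return (1000.0 / 10.4) * ((math.exp(0.000871 * no2_ppb) - 1)
                              + (math.exp(0.000537 * o3_ppb) - 1)
                              + (math.exp(0.000487 * pm25_ugm3) - 1))

def risk_category(index):
    index = round(index)
    if index <= 3:
        return "low"
    if index <= 6:
        return "moderate"
    if index <= 10:
        return "high"
    return "very high"

forecast = aqhi(no2_ppb=25, o3_ppb=40, pm25_ugm3=35)  # hypothetical forecast
if risk_category(forecast) != "low":
    print(f"AQHI {forecast:.1f} ({risk_category(forecast)}): issue phone alert")
```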
Specifically, the APWS was tested on a sample of individuals having at least one of the following characteristics that, according to the scientific literature and a committee of experts formed specifically to advise the authors of this study, are associated with vulnerability to heatwaves and air pollution: be 65 years old or older; present a heart or lung medical condition; or suffer from diabetes, kidney failure, a mental health disorder or a neurological disorder. The smog warning message was developed in consultation with experts in public health and after reviewing the relevant scientific publications (see the study of McLaren and Williams for a summary of health advice). It was then pretested on a small group of individuals vulnerable to smog in order to evaluate its relevance and clarity. In the final version of the message, recipients are advised to adopt the following behaviors during smog episodes: Stay inside as much as possible and close the windows. Outside, avoid physical effort, even short health walks. If you take medication in the form of aerosol pumps, always keep them with you. For any questions about your health, call Info-Santé at 811. In case of chest pain or difficulty breathing, call 911. The APWS was programmed to automatically deliver the prerecorded oral smog alert to the usual landline or mobile phone numbers of treatment group members before the onset of smog episodes. The automated phone message could not be modulated according to the severity of smog episodes or the vulnerability of study participants. The research team decided to test the performance of the APWS in the winter rather than in the summer season, because heat waves are frequently accompanied by smog episodes, and some advice provided to avoid smog exposure (such as staying inside with the windows closed) conflicts with tips intended to protect from heat (opening the windows at night; frequenting cool areas such as pools and beaches). 2.2. Questionnaire Design Phone surveys were used to collect data on participant characteristics such as gender, age and level of education. In addition, the questionnaire included questions measuring the main outcomes of the APWS. 2.2.1. Improvement of Information on the Occurrence of Smog Episodes, Relevant Adaptation Strategies and Risk Perception Two questions were used to determine whether respondents were informed about the smog episode. The first question asked respondents whether they were aware of the episode or not; the second, when they became aware of it, i.e., before, during or after the smog episode. Data on participants' knowledge of protective behaviors were collected via an open-ended question on the best ways of protecting themselves from smog. The number of measures cited by each respondent that matched the recommended behaviors was then calculated. The perception of the adverse effects of smog on health was measured with a five-level scale (1: smog is not dangerous at all; 5: smog is extremely dangerous to my health). 2.2.2. Adoption of Recommended Behaviors As mentioned earlier, recipients of the warning message were advised by the APWS to adopt the following behaviors during smog episodes: stay indoors with the windows shut, avoid intense outdoor physical effort and, for individuals using pump devices for respiratory medication, keep these devices with them at all times.
A five-level scale was used to collect information on whether respondents stayed indoors with the windows shut longer or less than usual during the smog episode (1: much less than usual; 5: much longer than usual). Regarding physical effort, respondents were first asked whether they had made intense outdoor physical efforts during the smog episode. Those who responded affirmatively were then asked to rate the length of this activity on a five-level scale (1: much less than usual; 5: much longer than usual). Similarly, individuals who take respiratory medication were asked to report on a four-level scale the extent to which they kept the pump devices with them (1: never; 4: all the time). 2.2.3. Mitigation of Health Symptoms Related to Smog and Use of the Health System Services Respondents had to report whether they suffered from any of the following symptoms caused by air pollution during the smog episode or within the following two or three days: difficulty breathing, chest pain, cough and eye irritation. A dichotomous variable was created taking the value of 1 if an individual reported suffering at least one of these symptoms and 0 otherwise. Regarding the use of health services, respondents were asked to indicate whether, during the smog episode or the following two or three days, they had called a nurse, pharmacist or doctor; called 811 (Info-Santé, personalized information from nurses); been hospitalized; visited an emergency room; consulted a doctor or nurse at a clinic; or consulted a pharmacist. A dichotomous variable was then created taking the value of 1 if the respondent used any of these services and 0 otherwise. The first draft of the questionnaire was reviewed by a panel of five public health experts and pretested on 22 individuals meeting the eligibility criteria mentioned above. In addition, the questionnaire was tested for item clarity by the survey firm on a sample of study participants before the beginning of data collection. 2.3. Participant Recruitment, Group Formation and Data Collection A sample of 1328 participants vulnerable to smog was recruited from the city of Longueuil, Canada in 2015 and randomly assigned to treatment and control groups (for more details on the recruitment process, see Mehiriz et al.). Participants were fully informed, when agreeing to participate, that they would be randomly assigned to these groups. To avoid contamination between groups, individuals sharing the same phone number were randomly assigned to the same group. Among the 1328 participants, 662 formed the treatment group and 666 the control group. Data on the effects of smog warnings were collected through three phone surveys. The first survey was conducted immediately after participant recruitment, from 25 June to 14 July 2015, to obtain data on the socio-economic and demographic characteristics of all study participants. A response rate of 76% was obtained. The purpose of the second survey was to collect baseline data on the variables measuring the effects of smog alerts. It took place after the smog alert issued by Environment Canada for 8 January 2016. A total of 770 interviews were conducted, corresponding to a 76.3% response rate. As the objective of this survey was to obtain ex ante measures, no smog warning was issued by the APWS on that date. Environment Canada also issued a smog alert for the Longueuil area on its website for the period of 5 and 6 March 2016.
In accordance with the protocol of this study, a smog alert was sent by the APWS to treatment group members only. Data on the effects of this alert were then collected through the third survey, which took place between 7 and 8 March 2016. A total of 519 interviews were conducted, corresponding to a response rate of 67.4%. Data on participants who were outside the Longueuil region during the smog episodes were not collected because the smog warnings were less relevant to their situation. The CONSORT chart of this experiment is presented in the accompanying figure. 2.4. Data Analysis Frequencies were used in this study to describe the distributions of binary and ordinal variables. Odds ratios were also used for these variables to compare the treatment and control groups and thereby estimate the impacts of exposure to smog warnings. The odds ratios were obtained by running binary and ordered logistic regressions in Stata (StataCorp LP, College Station, TX, USA). A continuous variable was used to measure participants' knowledge of coping strategies. A t-test of mean differences for independent samples was then used to estimate the effect of the smog alert on this variable.
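To make the modeling step concrete, a minimal sketch of how such odds ratios and the t-test could be computed is given below. It is an illustration only: the data are simulated, the variable names are hypothetical, and the code is written in R rather than the Stata used by the authors.

```r
# Minimal sketch of the Section 2.4 analysis, written in R for illustration;
# the authors report running their regressions in Stata. All data below are
# simulated and all variable names are hypothetical.
library(MASS)  # polr() fits ordered logistic (proportional-odds) models

set.seed(1)
n <- 519  # interviews completed in the third survey
d <- data.frame(
  treatment = rbinom(n, 1, 0.5),   # 1 = received the APWS smog alert
  breathing = rbinom(n, 1, 0.15),  # symptom indicators from Section 2.2.3
  chest     = rbinom(n, 1, 0.05),
  cough     = rbinom(n, 1, 0.20),
  eyes      = rbinom(n, 1, 0.10),
  stay_in   = factor(sample(1:5, n, replace = TRUE), ordered = TRUE),
  knowledge = rpois(n, 1)          # number of protective measures cited
)

# Dichotomous outcome: 1 if any smog-related symptom was reported, 0 otherwise
d$any_symptom <- as.integer(d$breathing | d$chest | d$cough | d$eyes)

# Binary logistic regression; exponentiating the coefficient gives the odds ratio
m_binary <- glm(any_symptom ~ treatment, data = d, family = binomial)
exp(coef(m_binary)["treatment"])

# Ordered logistic regression for the five-level "stayed indoors" scale
m_ordinal <- polr(stay_in ~ treatment, data = d, Hess = TRUE)
exp(coef(m_ordinal)["treatment"])

# Independent-samples t-test on the continuous knowledge score
t.test(knowledge ~ treatment, data = d)
```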
3.1. Sample Characteristics and Baseline Differences Data on sample characteristics indicate that most participants were women, senior individuals and persons with chronic medical conditions. The comparison between the treatment and control groups shows no statistically significant differences except for annual household income, where the proportion of individuals earning less than 25,000 dollars was slightly higher in the treatment group. Baseline data on the outcomes measured with binary and ordinal variables were collected after the first smog episode, during which the APWS did not issue any alert to treatment group members. They indicate the absence of statistically significant differences between the treatment and control groups for all of these outcome variables. The two groups were also equivalent regarding the number of smog protection measures cited (mean difference = 0.02; p = 0.46). Data on sample characteristics and baseline differences suggest that the randomization process ensured the equivalence of the treatment and control groups and, as a result, there was no need for further controls for preexisting differences. 3.2. Impacts of the APWS Data on the effects of the APWS were collected on 7 and 8 March 2016, after the smog alert of 5 and 6 March. The main results are summarized below. 3.2.1. Effect on Information on the Occurrence of Smog Episodes, Knowledge about Coping Strategies, and Risk Perception Half of the participants (49.7%) were informed of the occurrence of the smog episode, and we observed a strong difference between the experimental and control groups: the proportion of individuals informed of the occurrence of the smog episode was 70.5% in the treatment group against 30.1% in the control group. This difference corresponds to an odds ratio of 5.58. We also observed a slight difference between the treatment and control groups in the number of protection measures cited. Members of the experimental group cited an average of 1.0 of the three recommended behaviors, against 0.91 for the control group (p = 0.01). Overall, 41.1% of respondents considered smog to be somewhat dangerous to their health, suggesting a moderate perception of smog risks. In addition, the perception of the risk level was not affected by exposure to the APWS' smog alerts, as the odds ratio between the treatment and control groups was not statistically significant (OR = 1.15, p = 0.40). This result may be explained by the fact that the warning message did not provide information on smog risks. 3.2.2. Adoption of Recommended Behaviors Concerning respondents' behavior during the smog episode, 80% of respondents reported that they stayed indoors with the windows closed as much as usual. Experimental group members, however, were more likely to adhere to this advice than the control group (OR = 2.03, p < 0.01). They also kept medication on them more often than usual compared with the control group (OR = 2.15, p = 0.03). The analysis of the APWS' effect on physical effort shows no difference between the two groups (OR = 0.59, p = 0.22). This advice seems less relevant for the participants in this study: given their health conditions, only a very small proportion of them (5%) made intense physical efforts during winter smog episodes. 3.2.3.
Mitigation of Health Symptoms Related to Smog and the Use of the Health System Data on study participants indicate that approximately 30% of respondents suffered smog-related symptoms and 6.5% used health system services during the second smog episode. The comparison between the experimental and control groups suggests that smog alerts did not have an impact on these variables in this context: the odds ratio for the first variable is 1.05 (p = 0.81) and that for the second is 1.03 (p = 0.92). This study suggests that the APWS allows recipients to be better informed of the occurrence of smog episodes and improves their knowledge of coping strategies. Smog alerts also seem to significantly increase compliance with two of the three recommended behaviors. The APWS, however, does not seem to raise awareness of smog risks, alleviate the symptoms related to this hazard or reduce the use of the health system.
An APWS was used in this study to reach vulnerable groups and inform them about the occurrence of smog episodes and coping strategies. Recipients of smog warnings are expected to be better informed of the occurrence and risks of smog episodes and to have better knowledge of how to protect themselves than non-recipients. Furthermore, this improvement in information and knowledge is expected to encourage adherence to the recommended behaviors which, in turn, should reduce the risk of suffering smog symptoms and of using health system services. The results of this study indicate that automated phone warnings significantly improve information on the occurrence of smog episodes. As mentioned before, the APWS was programmed to send smog alerts to the treatment group when Environment Canada issued a smog warning on its website. This substantial difference between the treatment and control groups can thus be considered a measure of the effect of the APWS over and above the smog warning system already in place in Canada, which is generally available through the media; at the time of the study it was not available through text or email messages, although it became available very recently. The baseline data of this study suggest that the current Canadian smog warning system has a low capacity to reach vulnerable groups, as only 30% of the respondents were informed of the first smog warning issued by Environment Canada. This is concordant with the results of a 2015 survey in Hamilton, Canada showing that 60% of the respondents were aware of the existence of the AQHI system and only 27% checked it. This low level of coverage should be taken seriously because, contrary to heat waves, smog episodes are difficult to detect with the senses alone. The population depends on sophisticated monitoring and warning systems to obtain reliable and timely information on air quality and to adapt its behavior accordingly. There is thus a real need to develop new smog warning methods that go beyond the simple dissemination of alerts through mass media. The UK, for instance, has implemented an air quality alert system that allows subscribers to receive information on air quality via text, phone call, email or the internet. In this regard, our study provides evidence supporting the idea that automated phone warning systems are a promising solution for improving the reach of smog alerts among vulnerable subgroups. This study also indicates that automated phone warnings improve adherence to recommended measures for coping with smog episodes, thus confirming the findings of previous studies on this subject. For instance, Wen and Mokdad found that poor air quality alerts result in a reduction of outdoor activities among people with asthma. Likewise, a review of 21 studies by D'Antoni et al. found evidence supporting the idea that exposure to air quality alerts improves compliance with recommended behaviors. However, as the cost of intertemporally substituting activities increases over time, adherence to smog-protective behaviors is likely to decrease with time. This eventuality raises concerns about the performance of warning systems during long smog episodes, which frequently occur in several countries' metropolitan areas. This study suggests that the APWS warnings failed to mitigate health symptoms associated with poor air quality or to reduce the use of health system services. This finding adds to the uncertainty surrounding the health benefits of smog warnings in general.
In a population-based cohort study, Chen et al. showed that the implementation of an air quality alert program in Ontario (Canada) was associated with some reductions in respiratory morbidity, but not with the other health outcomes examined. Lyon et al. even found that smog warnings have the adverse effect of increasing hospital admissions for respiratory conditions as well as emergency department attendance. The authors suspect that this unexpected outcome could be attributed to the fact that warning messages frequently advise recipients to consult health professionals if they suffer smog-related symptoms. This seemingly nonexistent or, at best, small contribution of smog alerts to the mitigation of health problems invites a serious rethinking of the role of smog warning systems. Chen et al. call for enforced public actions to reduce air pollution instead of relying on smog alerts alone. As is the case with the APWS, some alert systems are implemented with the purpose of protecting senior individuals and those with chronic medical conditions. However, members of this target population do not spend much time on outdoor activities compared with other social groups and, as a result, are often less exposed to smog episodes. The baseline data of our study also show that only 6% of participants reported making intense outdoor physical efforts during the first smog episode. Given this low level of risk exposure, improvements in compliance with the recommended behaviors would not have much effect on the reduction of smog-related symptoms and, therefore, on the use of the health system. It thus seems that the health benefits of smog warnings would be more significant if they targeted individuals with intense and long periods of outdoor activity, such as construction and road workers, and sportsmen and sportswomen. The results of this study suggest that the APWS does not increase recipients' awareness of the adverse effects of smog. Risk perception has been found to be an important determinant of the adoption of protective behaviors. The performance of the APWS could thus be improved by including relevant information on the negative impacts of smog exposure on population health. Such information may improve recipients' awareness and, therefore, compliance with the recommended behaviors. This study presents some limitations. While the warning system is intended to protect vulnerable groups from smog episodes in general, the scope of this study is limited to the impacts of winter smog warnings. We should thus be cautious about generalizing this study's findings, as the participants in this experiment, given their health conditions, are presumed to substantially reduce their outdoor activities in the winter. They therefore have low exposure to winter air pollution episodes compared with those of the summer season. The conclusions are also based on a single two-day smog episode, which could limit their generalizability. Finally, it should be noted that participants in this study were not randomly selected from a defined population; this could also affect the generalizability of the findings.
In this study, we used an experimental design to measure the effects of an automated phone warning system on individuals vulnerable to smog. The comparison between the treatment and control groups shows that exposure to smog warnings improves information on the occurrence of smog episodes. Treatment group members also have more knowledge of how to protect themselves from this hazard, and they are more likely to adopt the recommended behaviors than members of the control group. The analysis, however, shows that the system has no discernible effect on awareness of smog health risks, on the reduction of smog-related symptoms or on the use of the health system. The target population's low exposure to smog may explain the absence of beneficial health effects of the smog warnings.
Trends in ophthalmology applicants going unmatched in the Canadian Resident Matching Service | 3e7243f4-165e-4bb6-9ac3-cc78f587b991 | 10961129 | Ophthalmology[mh] | The steady increase in unmatched medical graduates has been a concern in recent years, resulting in criticism of the Canadian Resident Matching Service (CaRMS) and Undergraduate Medical Education (UGME). Studies have explored biases in the resident selection process, as Canadian residency programs rely less on objective measures (e.g., publications or academic performance) than in the United States (US), and more on subjective indices. In the 2022 CaRMS cycle, 22 Canadian applicants whose first choice discipline was ophthalmology went unmatched after the first iteration. This was the highest number of first choice applicants to any surgical specialty going unmatched, even though ophthalmology represents less than two percent of available positions each year. CaRMS provides application and match services to over 30 specialty entry-level postgraduate training programs in Canada through two iterations. Applicants unmatched after the first iteration may participate in the second iteration, which consists of all unfilled seats following the first iteration. Non-identifiable data related to the match process for over 50 years are accessible on the CaRMS website. While studies have explored trends in ophthalmology match outcomes, we found no studies conducting a comparative analysis of rates of going unmatched or related application behaviour data between surgical or other competitive specialties. The financial and emotional repercussions of going unmatched in any specialty can be severe, and in rare instances can even lead to significant mental health challenges. Implementing evidence-informed residency application and selection processes, while a first step, may not decrease the rate of applicants going unmatched. Our research aims to determine whether any disparities or trends exist in the match outcomes of ophthalmology applicants during the CaRMS process through a comparison with other competitive and surgical specialties. Study design We conducted a cross-sectional analysis of CaRMS data available on the residency match over the past 10 CaRMS cycles from 2013-2022. This study was exempted from requiring ethics approval by the University of British Columbia Behavioural Research Ethics Board (BREB). Sampling methods We extracted data from the CaRMS R-1 "Data and reports" web page. Our data include only Canadian Medical Graduate (CMG) applicants. We analyzed ophthalmology CaRMS data in comparison to both surgical disciplines (cardiac surgery, general surgery, neurosurgery, obstetrics and gynecology, orthopedic surgery, otolaryngology, plastic surgery, urology, and vascular surgery), as well as the top five most competitive non-ophthalmology disciplines, whether surgical or non-surgical. We defined the top five most competitive specialties as those with the highest ratio of applicants to quota of seats available over the 2013-2022 study period: plastic surgery, dermatology, emergency medicine, otolaryngology, and urology. Programs that did not offer positions every cycle during the study period were excluded. Sample size We collected data on first choice applicants to ophthalmology (608), surgical specialties (5,153), and the top five most competitive specialties (3,092).
Statistical analysis We applied the chi-square contingency test to analyze associations between first choice applicants to ophthalmology, surgical specialties, and the top five most competitive specialties and the following outcomes: going unmatched, ranking no other discipline, matching to an alternate discipline, and not applying to the second iteration of the CaRMS match process. We calculated absolute differences by subtracting the proportions at the beginning of the study period (2013) from those at the end (2022). We used the two-tailed Cochran-Armitage trend test to assess the change in proportions over time. P-values less than 0.05 were considered statistically significant. All statistical analyses were conducted using R version 4.2.1.
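As a rough illustration of these tests, the sketch below reproduces the group comparison and the trend test in R; base R's prop.trend.test() provides an equivalent chi-squared test for trend in proportions (Cochran-Armitage). The period totals come from the article, but the yearly counts are hypothetical placeholders, since the cycle-by-cycle data are not reproduced here.

```r
# Illustrative sketch of the statistical tests in R. Period totals are taken
# from the article; the yearly counts below are hypothetical placeholders.

# Chi-square contingency test: unmatched vs. matched counts per applicant group
unmatched <- c(ophthalmology = 120, top_five = 371, surgical = 702)
total     <- c(ophthalmology = 608, top_five = 3092, surgical = 5153)
tab <- rbind(unmatched = unmatched, matched = total - unmatched)
chisq.test(tab)

# Cochran-Armitage-style trend test across the 10 cycles:
# x = unmatched applicants per cycle, n = first choice applicants per cycle
years     <- 2013:2022
x_unmatch <- c(4, 6, 7, 9, 10, 12, 14, 16, 20, 22)      # hypothetical counts
n_applied <- c(48, 52, 55, 58, 60, 62, 64, 66, 69, 76)  # hypothetical counts
prop.trend.test(x_unmatch, n_applied, score = years)
```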
Over the study period, first choice ophthalmology applicants were more likely to go unmatched (18.9% [120/608]) than applicants to the top five most competitive (11.9% [371/3,092]) and surgical (13.5% [702/5,153]) specialties (χ2 = 26.23, p < 0.001). The proportion of first choice ophthalmology applicants going unmatched increased significantly (8.3% v. 29.0%; absolute difference +20.7%; p = 0.002), while no significant increases were observed for the top five most competitive (9.3% v. 11.2%; absolute difference +1.9%; p = 0.81) and surgical (12.8% v. 16.4%; absolute difference +3.61%; p = 0.06) specialties from the 2013 to 2022 application cycles. The average proportion of first choice applicants ranking no alternate disciplines was more than twice as high for applicants to ophthalmology (31.8% [194/608]) as for applicants to the top five most competitive specialties (13.2% [405/3,092]), but comparable to that for applicants to surgical specialties (32.2% [1,653/5,153]) (χ2 = 381.45, p < 0.001). The proportion of first choice ophthalmology applicants ranking no alternate disciplines did not change significantly (29.2% v. 32.9%; absolute difference +3.7%; p = 0.86), while the proportions decreased significantly for the top five most competitive (20.4% v. 8.6%; absolute difference -11.8%; p < 0.001) and surgical (38.3% v. 27.2%; absolute difference -11.1%; p < 0.001) specialties over the study period. Over the study period, the proportion of first choice applicants matching to an alternate discipline during the first iteration was 18.9% (120/608) for ophthalmology, 31.2% (967/3,092) for the top five most competitive specialties, and 18.0% (928/5,153) for surgical specialties (χ2 = 196.82, p < 0.001). Interestingly, the match rate to alternate disciplines in the first iteration was highest for ophthalmology applicants (0.41), followed by applicants to the top five most competitive (0.36) and surgical (0.27) specialties (χ2 = 64.82, p < 0.001). The average proportion of first choice applicants who went unmatched during the first iteration and subsequently did not apply to the second iteration was 13.8% (48/347) for ophthalmology, 3.2% (49/1,547) for the top five most competitive specialties, and 5.2% (136/2,624) for surgical specialties (χ2 = 67.45, p < 0.001). Expressed as a percentage of first choice applicants left unmatched after the first iteration (rather than of all first choice applicants), the second iteration opt-out rate was 57.8% for applicants to ophthalmology, compared with 35.1% for the top five most competitive specialties and 24.9% for surgical specialties (χ2 = 86.73, p < 0.001). The second iteration opt-out rate also increased more for ophthalmology applicants (46.2% v. 68.2%; absolute difference +22%; p = 0.30) than for applicants to the top five most competitive specialties (16.7% v. 31.4%; absolute difference +14.7%; p = 0.19) and surgical specialties (31.6% v. 39.3%; absolute difference +7.7%; p = 0.19). This study presents compelling evidence that ophthalmology applicants go unmatched at a higher rate than applicants to other competitive or surgical specialties. While supports exist for unmatched applicants, going unmatched results in tremendous anxiety and career uncertainty. Despite this, applicants to ophthalmology are less likely than applicants to other surgical or competitive specialties to rank alternate disciplines, resulting in a smaller proportion of ophthalmology applicants matching to alternate disciplines. When ophthalmology applicants did rank alternate disciplines, their success rate was higher than in both comparison groups. Applicants to ophthalmology were also more likely than the comparison groups not to participate in the second iteration after going unmatched in the first iteration, with over half choosing not to participate. These applicant behaviours, characterized by inflexibility, ultimately contribute to the risk of remaining unmatched throughout the entire CaRMS process and are less frequent among applicants to other surgical or competitive specialties.
The exact reasons for this phenomenon are unknown, and further qualitative studies involving applicants may offer valuable insights into why applicants to ophthalmology are less inclined to parallel plan compared with their peers. One possible contributing factor is a lack of adequate planning, which may lead applicants to forgo parallel planning entirely. In this regard, it would be beneficial for UGME stakeholders to explore the effectiveness of pre-clerkship workshops in providing practical strategies to support parallel planning, as suggested by the Association of Faculties of Medicine of Canada (AFMC). Other factors contributing to going unmatched may include not being competitive in a first-choice specialty, lack of preparedness for interviews, or not ranking enough programs. The AFMC Student Elective Diversification Policy promotes parallel planning by limiting students to eight weeks of electives in any single entry-level discipline. A study at the University of British Columbia examining elective diversification and match rates found an unclear correlation between elective diversification and match outcomes, suggesting that "…a viable back-up plan may reside in the application as a whole, rather than solely in the elective selection process." Research into whether the Student Elective Diversification Policy has led to increased parallel planning may provide insight into the efficacy of these policies. No studies to our knowledge have explored effective strategies for promoting parallel planning at the Post-Graduate Medical Education (PGME) level. Possible strategies include requiring at least one reference letter from a specialty other than the applicant's own, and valuing research in unrelated disciplines equally. Implementation of these strategies, if shown to be effective, would require coordinated efforts at the PGME level. This study is limited to reporting findings based solely on CaRMS data and does not offer insight into the individual motivations of applicants. The reasons behind the higher proportions of ophthalmology applicants not ranking other specialties during the first iteration and dropping out during the second iteration compared with their peers remain unclear. Further research is needed to better understand the motivations behind ophthalmology applicant behaviour in comparison to other applicant groups. This research should involve both past and prospective applicants to ophthalmology and other competitive specialty programs. First choice ophthalmology applicants have higher rates of going unmatched in the CaRMS application process. This can be attributed, at least in part, to ophthalmology applicants being less likely to rank alternate disciplines and choosing not to participate in the second iteration. Additional research is needed to explore ophthalmology applicant behaviours and gain a deeper understanding of our study's findings.
“An invitation to think differently”: a narrative medicine intervention using books and films to stimulate medical students’ reflection and patient-centeredness | 16a66ce6-e31e-4e6a-a978-a07a31208875 | 10416442 | Patient-Centered Care[mh] | There is increased recognition of the importance of patient preferences, the social and cultural contexts of care, and patients’ individual life stories. Indeed, patients who feel like active participants in their care trajectories through patient-centered care approaches, such as shared decision-making, also experience better health outcomes. Scholl et al. systematically mapped the concept of patient-centeredness and identified underlying factors that can optimize patient-centered care. These factors include investing in the clinician-patient relationship, acknowledging the uniqueness of each patient’s lived experiences, and acknowledging biopsychosocial factors. Clinicians’ interpersonal skills, such as empathy, compassion, trustworthiness and self-reflectiveness, can also contribute to patient-centered care. Sandars adds that clinicians’ awareness of their own underlying beliefs and values can positively impact the clinician-patient relationship. This relationship between clinicians’ self-awareness and empathic healthcare has also been studied by Dasgupta et al. using reflective writing. As they quote in their study: “It takes a whole doctor to treat a whole patient”. Reflective writing is a powerful tool that has been used extensively to promote (future) clinicians’ professional development, self-understanding and sense of connection to their colleagues and patient communities. However, stimulating the development of medical students’ sense of empathy, compassion and self-awareness remains challenging. Veen et al. describe these attitudes as private experiences, which are difficult to train or articulate in classroom settings. In another article, they note that requiring students to map or assess their own development in interpersonal skills (such as empathy, compassion and self-awareness) via required writing assignments runs the risk of eliciting inauthentic or ‘zombie-like’ self-reflections. This is further exacerbated by educators’ attempts to transform highly individualized and varied interpersonal skills into uniform, measurable learning outcomes. Narrative medicine (NM) was founded at Columbia University and has been proposed as a teaching model for patient-centered medical practice because it fosters attentive listening and clinician-patient affiliation. Narrative competence, according to the NM founders, constitutes “the ability to acknowledge, absorb, interpret and act on the stories and plights of others”. NM as a pedagogic strategy employs different art forms, close reading exercises and creative writing. It provides students with a broader perspective on the experience of illness, which they can use in their interactions with patients. It also helps students better understand their own life journeys, which in turn can help them recognise their own feelings and emotions during patient interactions and can eventually lead to more authentic engagement with patients. So far, multiple studies have described the beneficial effects of NM as a pedagogical tool for medical students, with a positive impact on communication skills, empathy, self-reflection, and relationship-building.
However, most of these studies report on small study groups, predominantly in elective courses, which leaves the efficacy of NM in larger study groups largely unknown. The aim of this study was to explore whether a mandatory narrative medicine lesson could stimulate meaningful self-reflection and (themes related to) patient-centeredness in medical students.
Setting In this mandatory exercise, students read a book, watched a film and discussed these art forms in small groups (Table ). First, during the preparatory lecture, students received a brief introduction to the main tenets and goals of NM. Second, students were assigned to one of three book and film pairings. Each pairing related to one of the specialties of their current clerkships (neurology, psychiatry and geriatrics). The pairing ‘The doctor as a patient’ concerned the illness experience of physicians facing a life-changing neurological disease, the pairing ‘The mysterious brain’ concerned the lived experience of persons with autism spectrum disorder and their relatives, and the pairing ‘Until death do us part’ concerned the experiences of partners of elderly patients during the last phase of their lives. Students were asked to read the book and watch the film individually over a span of 4 weeks. As preparation for the follow-up group discussion, they were asked to select a fragment from the book or the film that impacted them and to compile their own questions and comments to share with the group during the discussion. Finally, during the 45-min small group discussion (max. 6 students), students discussed the art forms under the guidance of faculty members from the department of Global Public Health, Bioethics and Health Humanities. At the end of the session, students completed a written reflection exercise and shared their answers with the group; written responses were collected by faculty members at the end of the lessons. Faculty members all attended an instruction session led by one of the co-authors (M.M.) and were given a teacher manual with theoretical background information about the lesson, sample discussion questions, and the suggested lesson structure (Table ). Study participants and lesson context All fourth-year medical students at the University Medical Center Utrecht (UMCU) who attended the NM lesson between October 2018 and March 2020 were included in this study. Exclusion criteria were: students whose handwritten reflections were not readable, students who did not complete the written reflection exercise, and essays that were not submitted to the lesson coordinator by the teacher. The NM lesson is a mandatory part of the longitudinal ‘Patient Perspectives Program’ of the UMCU medical curriculum and takes place during students’ longitudinal clerkship in neurology, psychiatry and geriatrics. Data analysis A mixed-methods design was used to analyze students’ written reflections, because we wanted to study the content as well as the quality of the reflections. First, essays were thematically analyzed in NVivo 12 using an inductive approach. Two researchers (E.L. and M.M.) independently coded a sample of 25 essays from each book and film pairing for recurrent themes and concepts. Codes and themes were compared, discussed and combined to form the preliminary codebook. Subsequently, both researchers applied this codebook to 10 essays from each of the three pairings. They discussed their findings and revised and added new codes. After reaching agreement, the first author (E.L.) used the final codebook to code all essays. Students’ feedback and recommendations were included in the thematic analysis to gain insight into the lesson experience as well. Second, a quantitative analysis was conducted to assess the level of students’ reflections using a scoring system based on ‘The Reflection Evaluation For Learners’ Enhanced Competencies Tool’ (REFLECT).
The REFLECT rubric was originally designed for formative assessment, but was chosen for this study as a means of gaining a more objective sense of the quality of students’ reflection. For this study, a simplified version of the REFLECT rubric was created; essays were scored in their entirety on a scale from 1-4 instead of scoring each criterion in the original REFLECT rubric (Table , criteria). This adaptation was necessary because we evaluated short reflections, whereas the REFLECT rubric is designed to assess longer reflections. The scores 1-4 respectively represent habitual action (1), thoughtful action or introspection (2), reflection (3) and critical reflection (4). All authors assessed the level of reflection in the student essays using the simplified REFLECT rubric. To reach consensus about the scoring system, all authors scored a sample of 25 essays from each of the three pairings. Differences in scoring were discussed and the scoring system was refined. All authors then scored an additional sample of 10 essays for each pairing. After meeting again to discuss differences in scores, the scoring system was finalized. Then, the authors independently scored all 203 essays, and a two-way random-effects intraclass correlation coefficient model was used to assess the agreement between the three raters . Analysis showed good agreement between the authors, with an intraclass correlation of 0.96 (95% CI 0.94-0.97). Where authors scored essays differently, the median score was used as the final score. Because the data were not normally distributed, median reflection scores were used. To compare scores between the book-film pairings, we used the Kruskal-Wallis test to calculate the p-value (since our data were not normally distributed and ordinal); the p-value gives the probability of observing score differences between the book-film pairings at least as large as those found if chance alone were at work. SPSS 27.0 was used to conduct these quantitative analyses.
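Although the authors conducted these analyses in SPSS 27.0, the same two statistics can be illustrated in a few lines of Python. The sketch below is a minimal, hypothetical re-computation: the rating matrix is simulated, and the ICC is computed as the single-measure ICC(2,1) of Shrout and Fleiss, one common reading of a two-way random-effects agreement model.

```python
# Minimal sketch (hypothetical data): inter-rater agreement via a two-way
# random-effects ICC and a Kruskal-Wallis comparison of reflection scores
# across the three book-film pairings. The study itself used SPSS 27.0.
import numpy as np
from scipy.stats import kruskal

def icc2_1(scores: np.ndarray) -> float:
    """ICC(2,1), Shrout & Fleiss: two-way random effects, absolute agreement.
    `scores` is an (n_essays, k_raters) matrix of REFLECT scores (1-4)."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Simulated ratings: 203 essays x 3 raters; raters mostly agree.
rng = np.random.default_rng(0)
true_level = rng.integers(1, 5, size=203)                     # latent 1-4 level
noise = rng.integers(-1, 2, size=(203, 3)) * (rng.random((203, 3)) < 0.1)
ratings = np.clip(true_level[:, None] + noise, 1, 4).astype(float)
print(f"ICC(2,1) = {icc2_1(ratings):.2f}")

# Final score per essay = median across raters; compare the three pairings.
final = np.median(ratings, axis=1)
pairing = np.repeat([0, 1, 2], [80, 84, 39])                  # group sizes from the study
h, p = kruskal(final[pairing == 0], final[pairing == 1], final[pairing == 2])
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```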
A total of 345 students followed the NM exercise. Students were excluded because their handwritten reflection was unreadable, because they had not written or submitted a reflection, or because the reflections had been misplaced by the teacher (n = 136). This left 203 students’ essays from the reflective writing exercise (‘The doctor as a patient’ n = 80; ‘The mysterious brain’ n = 84; ‘Until death do us part’ n = 39).

Qualitative analysis of students’ reflections

Reflection on a professional level

Students reflected on their role as a healthcare professional (HCP). They reflected on their current role as clinical interns and linked the lesson material to their own experiences in their internships. For example, one student (quote 1, Table ) reflected on her own actions in the clinic and reported being more aware of the consequences of these actions after completing the NM assignment. Students also set intentions for their role as a future HCP. Many of these students linked a lesson they had learned from the exercise to what they considered ‘good healthcare’ in general or to what they intended to incorporate into their future practice (quote 2, Table ). Other students articulated more specific intentions, such as communication strategies they intended to use in future interactions with (similar) patients (quote 3, Table ) or possible care options for patients similar to the ones depicted in the art forms. Students also described the importance of considering more than treatment options alone (quote 4, Table ), and some students even connected these intentions to shared decision-making practices.

Reflection on a personal level

Students described lessons they had learned that had an impact on their personal worldview or beliefs. They reflected on health, illness and death and connected this to their opinion of a ‘meaningful life’. They thought about what made them happy and mentioned the importance of finding a balance between work and private life (quote 5, Table ). In addition, some students reported gaining more insight into their own biases and stereotypes, for example about psychiatric patients (quote 6, Table ).

Attention to illness experience and individual patient story

Students mentioned that the stories in the book and film pairings helped them understand more about illness experiences or perspectives of patients. They expressed gaining more insight into the problems or dilemmas patients face. For example, a student (quote 7, Table ) mentioned a new awareness of a patient’s vulnerability. Many students also noticed the individual nature of each patient’s story and life experiences, resulting in different views, wishes, and beliefs. They also mentioned how important it is to be aware of this as an HCP (quote 8, Table ), and frequently linked this to future intentions. Furthermore, some students reported an awareness of how their own life story and experiences influenced their worldview, which they described as a gap between themselves and their patients (quote 9, Table ). Some also expressed awareness of their own limitations in understanding other people’s feelings.

Topics related to specific book/film pairings

The themes mentioned above were found in all three book and film pairings. We also found themes that were related to specific book and film combinations. In the pairing ‘The doctor as a patient’, some students expressed a new awareness of the possibility of getting sick or dying as an HCP themselves.
As illustrated by one student (quote 10, Table ), they expressed the distance they normally felt between their own health and that of the patients they worked with every day. For the pairing ‘The mysterious brain’, students mentioned gaining more insight into a specific disease—autism spectrum disorder—as well as the workings of an autistic brain (quote 11, Table ). This new knowledge, according to their reflections, provided a new perspective on what they had learned in the standard curriculum. In the third pairing, ‘Until death do us part’, students expressed more understanding of the role of the support system around the patient. As one student wrote (quote 12, Table ), the art forms illuminated how burdensome it could be for family members and loved ones to care for someone ill. We also found this theme in ‘The mysterious brain’, where students mentioned the importance of a good support system for optimal treatment outcomes.

Quantitative analysis of students’ level of reflection

Table shows the distribution of reflection levels and representative quotes. Approximately half of the students showed in-depth reflection (score of 3 or 4). These students made a connection between what they read or saw and their own experiences, values, or beliefs. They demonstrated a process of analysis and meaning-making. Their reflections furthermore displayed a distinct and authentic writerly voice. Some of these students also demonstrated critical reflective skills by questioning their own norms and values and by trying to articulate a deeper understanding of the dilemmas illustrated in the book and film. The other half of the students used more descriptive writing in their essays (score of 1 or 2). In such essays, students only showed habitual actions, meaning they described general statements and abstract lessons. Others wrote more elaborate descriptions, but still did not connect these to their own experiences. These reflections remained superficial as there was little or no meaning-making. Also, many of these essays lacked an authentic or distinct writerly voice. The median level of reflection seen in the students’ essays was 2 (n = 203; IQR 2-3). The reflection level differed between the book and film pairings (p < 0.01), with a median score of 3 (n = 79; IQR 2-4) for ‘The doctor as a patient’, 2 (n = 83; IQR 1-3) for ‘The mysterious brain’ and 2 (n = 39; IQR 1-2) for ‘Until death do us part’.

Students’ experience of the NM lesson

Students appreciated the art forms and used terms like ‘enjoyable’ and ‘interesting’ to describe the assignment (quote 1, Table ). Learning about illness in a different way gave them new insights in comparison to the standard curriculum (quote 2, Table ). Students also mentioned that the discussion with their fellow students enriched the learning process (quote 3, Table ). Furthermore, a number of students mentioned the supportive classroom atmosphere, a combination of the small group setting and the guidance of the teacher (quote 4, Table ). Most of the suggestions for improvement pertained to practical issues related to scheduling and the availability of the lesson materials (quote 5, Table ). Some students were critical of the choice of the art forms, especially in the pairing ‘Until death do us part’. A few students considered the time investment of the assignment too great.
This study showed that a mandatory narrative medicine intervention, in which students read a book and watched a film, led to reflection on themes related to patient-centeredness in a large sample of medical students. The role of narrative medicine in the medical curriculum has been studied in the past, and the themes we found in this research correspond with other findings on this subject, namely participant satisfaction, perspective taking and self-reflection [ – ]. However, most of these studies included a small study population, predominantly in elective courses. We were able to validate these positive results in a larger population. More importantly, we have explored a mandatory narrative medicine lesson for all fourth-year medical students, not just for students with a predisposition for this field. Furthermore, this is the first study to combine a book and a film in a narrative medicine exercise. By using two different media we aimed to interest students who might feel some resistance toward reading and, more broadly, toward narrative medicine activities. As Arntfield et al. mention, humanities-based programs are sometimes viewed as ‘counter-culture’, especially by those unfamiliar with the field. Also, by using two different media with the same topic, we encouraged students to think about similarities and differences between these two perspectives. In addition, we demonstrated that the different book-film pairings taught students overarching lessons as well as lessons specific to the thematic pairings. This could be further exploited to teach targeted lessons throughout the medical curriculum. The content of students’ reflections aligns with underlying principles of patient-centered care as described by Scholl et al. : “the clinician-patient relationship”, “patient as a unique person” and “essential characteristics of the clinician”. Regarding the clinician-patient relationship, some students mentioned communication strategies they could use to create a deeper connection with their patients in general and with specific patient groups. Others gained more insight into their own attitudes and how these could influence the relationship. In addition, several students gained more awareness of the individual illness experience, in other words, the unique nature of each patient’s story. They mentioned the individual perspectives and priorities of patients and how personalized treatment can complement these values. Various students also demonstrated relevant clinician characteristics related to patient-centeredness. They showed empathy and compassion towards the characters depicted in the art forms and were able to connect these narratives to the patients they encountered in their clinical internships. Additionally, this narrative exercise stimulated meaningful self-reflection, another clinician characteristic related to patient-centered care. Even though this was only a short reflective writing exercise, approximately half of the students showed in-depth reflection. This is probably an underestimation of the number of students who actually reached in-depth reflection but did not write this down during the brief writing exercise. Furthermore, a major part of the self-reflection took place during the group discussion itself. The choice of book and film pairing also influenced the level of reflection in the essays. Students who were assigned to the pairing ‘The doctor as a patient’ showed more reflection compared to students who were assigned to other pairings.
We hypothesize that the theme ‘The doctor as a patient’ related most closely to students’ personal and professional lives. A perceived affinity with the characters in the art forms is associated with greater identification with these characters. This process of identification is known to influence the beliefs and values of readers . Other factors might also have played a role in the richness of the essays. For instance, it is known that teacher qualities can influence students’ learning motivations and academic performance . Variations in teachers’ explanation of the writing exercise as well as their verbal and non-verbal communication during the lesson might have influenced students’ reflections and observations. This is a topic that deserves future research. While this study showed positive effects on students’ patient-centeredness, we do not know to what extent this lesson ultimately impacts students’ reflective capacity and patient-centeredness in the clinical setting. This is due to the study design and the short nature of the NM intervention. Also, faculty members were free to change elements of their lessons to their own liking. As a result, we noticed variations in the length and structure of the writing exercises; some faculty members did not even ask students to write down their reflections. A substantial number of students were excluded from this study because their essays were illegible, they did not complete the assignment, or their teachers did not submit the essays properly. These teacher and student factors could have led to a positive selection bias in the results. Because it was a short, guided reflection, some students might have given the obvious or ‘desired’ answers based on other medical school lecture topics about shared decision-making and patient participation . However, the analysis of students’ written reflections revealed a variety of topics and themes that were broader than the guiding questions.
In conclusion, this narrative medicine lesson at the UMCU facilitated reflection on multiple aspects of patient-centeredness among fourth-year medical students. This research underlines the value of narrative medicine in the standard medical curriculum by validating various purported outcomes (for instance, contributing to patient-centeredness) in a larger study population and a mandatory course. Going forward, NM exercises should be more thoroughly integrated into medical curricula to provide students with more continuity during their education and to teach them long-term skills for their future careers. Future research should focus on how students transfer these skills into clinical practice and how this transfer can be optimized.
Non-pharmaceutical Interventions and the Infodemic on Twitter: Lessons Learned from Italy during the Covid-19 Pandemic | 3165915a-b80f-46c8-9fab-87916c5f44b9 | 7936238 | Health Communication[mh] | Since it emerged as a global threat in early 2020, the COVID-19 pandemic has affected health, human functioning and society on an unprecedented scale. The global spread of the virus in the absence of vaccines and effective treatments demonstrates the importance of effectively using non-pharmaceutical intervention (NPI) such as social distancing to reduce transmission of the virus, limit mortality and avoid overwhelming local healthcare systems . Two strategies were used in most nations: quarantine of infected persons and social distancing to mitigate the spread of the virus . Effective implementation of containment and social distancing strategies requires social trust, given the threat of massive disruption to society and the economy . In response to the rapid spread of COVID-19, many nations mandated all but essential businesses to be shuttered and for individuals to “shelter in place” to reduce the risk of transmission of the highly contagious virus. In Italy, as one of the first countries to be severely hit by the wave, the “#I-stay-home” campaign obliged citizens to avoid leaving their homes. This effort and similar programs in other nations require trust and public consensus, to engage a nation’s citizens as active co-participants in their own and their fellow citizen’s health and well-being . At the time of submission, almost 3 million cases of infection and nearly 100,000 COVID-19 related deaths had occurred in Italy. The effectiveness of measures such as social distancing to reduce the spread of the virus depends on the level of social trust and collection societal action that is supported by integration among the key groups such as citizens, institutions, information providers and elected officials . Artificial dichotomies between the need to contain the spread of the virus and the need to maintain the health of the economy, conflicting themes in public and social media, and lack of a unified message can undermine the citizen buy-in, social trust, public compliance, and the speed and effectiveness of implementation. Social trust and precise messaging are key in the current efforts to address an unprecedented challenge to the healthcare systems of nations. They are needed to inform public perceptions and contribute to a developing regional or national consensus that helps leaders and policymakers to coordinate transparent and consensus-based efforts to adopt of country-wide social distancing measures such as closing schools, banning mass gatherings, and isolating individuals with the virus and their contacts. These efforts were shown to be effective in containing the spread of the Spanish Flu in 1918 . In this paper, we explore the content and messages in social media communications during the early stages of the spread of the COVID-19 virus in Italy, which numbers are reported in Appendix 1 (Table ). The aim is to better understand how social media dialogue can affect and be used strategically in the adoption of large-scale regional and national social distancing measures to prevent the spread of the virus.
NPI

The World Health Organization Influenza Pandemic Plan of 1999 puts considerable attention on the role of non-pharmaceutical public health interventions to contain or delay the spread of a new influenza virus . NPI include early case isolation, social distancing, the use of face masks, school closures and business shutdowns . The application of NPI proved to reduce the spread of the COVID-19 virus in several areas inside China . However, to be effective, NPI require authorities to agree in advance on a range of containment strategies, and the population to be informed and willing to adopt the necessary measures . Analyzing the NPI applied during the influenza pandemic of 1918, Whitelaw wrote: “To sum up, it is evident, that no public health law, which has not the endorsation and support of the public generally, can ever be reasonably well enforced.” More recently, the WHO wrote: “Some of the lessons learned from the 2003 severe acute respiratory syndrome (SARS) epidemic can be applied to influenza, including the success of public campaigns to encourage self-recognition of illness, telephone hotlines providing medical advice, and early isolation when potential patients seek health care.” Several variables have proved necessary for public endorsement of the application of NPI, such as the perceived risk, the severity of the consequences and the perceived efficacy of the adopted measures . Therefore, while NPI have proved effective in limiting the spread of a pandemic, their deployment requires public endorsement. Giving people the right information is essential to empower them to evaluate their risks and the importance of curtailing their freedoms in order to limit the spread of the virus.

Emergency management and social media communication

The development of social media has changed communication in terms of both information availability and flow. The collaborative generation and dissemination of several types of content are among the most distinctive features of social media. According to Brynielsson et al. “Within the field of crisis communication, social media possibilities such as online sharing and social networking have had an impact on the way crisis information is disseminated and updated.” Among the many social media, Twitter has been widely used in the emergency management literature due to its specific features. For example, Twitter allows users to post comments visible to the whole audience but also to target a specific audience directly through the mention and reply functions . The hashtag feature might help support the rapid building of an issue around specific community problems or geographical areas . Research on emergency management shows that Twitter has been used to improve situational awareness among communities . It can inform local communities through emergency alerts , and it can act as a tool to facilitate social and political trends for change during emergencies, when emotions embolden people . Despite this great potential, the unchecked and socially constructed nature of messages shared on Twitter can lead to disinformation, contributing to the infodemic problem . For example, Panagiotopoulos et al. discuss the social amplification or reduction of risks that, on the one hand, might be caused by the Twitter flow and that, on the other hand, could be monitored by those responsible for risk management. Similarly, Surian et al. used Twitter discussions about human papillomavirus vaccines for clustering opinions and detecting risks for public health.
The COVID-19 emergency and the Infodemic

The COVID-19 is a global emergency “which started in Wuhan in China in early December 2019, brought into the notice of the authorities in late December, early January 2020, and, after investigation, was declared as an emergency in the third week of January 2020” . At the time of writing, COVID-19 has killed almost 2.5 million people worldwide. However, just a few months earlier, the nature and danger of the virus were hotly contested. The US Surgeon General, Jerome Adams, tweeted on February 1st, 2020: “Roses are red, violets are blue, risk is low for #coronarvirus, but high for the #flu” . On March 9th, 2020, the US President Donald Trump tweeted: “Last year 37,000 Americans died from the common Flu. Nothing is shut down, life and the economy go on... Think about that” . When it became clear that the situation was much worse, and commenting on his previous statements on Twitter, he later said: “circumstances change but it was a true statement at the time it was made” . Therefore, the COVID-19 emergency differs from other emergencies in that knowledge of the real risks was largely unknown, or at least debated, at the early stages of development of the pandemic. The development of the COVID-19 pandemic demonstrates the spread of fake news, false information based on unchecked facts . In March 2020, a poll developed by YouGov and the Economist revealed that 13% of Americans believed the COVID-19 crisis to be a hoax, while even world leaders’ social media posts had to be deleted for spreading misinformation about the Coronavirus . The development of false and unchecked information, recently named the infodemic, during the COVID-19 emergency is peculiar compared to other crises. The limited scientific knowledge available and the lack of consensus among the population increased the initial spread of the virus, given the specific nature of the NPI required. Twitter has proved to be “the dominant social reporting tool to spread information on social crises” . Previous studies employing crisis and emergency risk communication models are based on the monitoring of risks and the communication of warnings to avoid social amplification of the risks . However, the COVID-19 emergency represents a new context, where little knowledge was available at the beginning of the crisis about the real dangers. Understanding how communications flow on Twitter and shape the community’s understanding of the risks, in a situation where knowledge of the dangers is scarce or debated, therefore appears central.
Our analysis included three steps. First, we explored the main topics in messages by five groups with regular Twitter communication and sizable numbers of followers: institutions, news sources, elected officials, scientists and social media influencers, using topic modelling methods. Second, we used social network analysis to assess the size and reach of social networks and to identify boundary-spanning opportunities (sources and messages that span social networks) . Third, we conducted a chi-square trend analysis that analyzed the impact of the mounting crisis on the themes in social media messages.

Data collection

We downloaded tweets posted on the topic of COVID-19 infection in Italy from February 11th to March 10th, 2020. A tweet is an online posting created by a Twitter user, limited to 280 characters or less. Once published, a tweet appears on the Twitter home pages of all users who follow the individual who released the message. Users may retweet messages, amplifying selected messages and extending the spread of certain discussions. Twitter is the most heavily used micro-blogging platform in the world and provides access to its data. Although Twitter represents only a part of available social media, a number of studies have used Twitter data, showing that it is a reasonable proxy for political, social and scientific opinions . We selected tweets based on their contents using both keywords and the hashtags: virus, Coronavirus, and COVID-19. Other keywords, such as SARS-CoV-2, were excluded since tweets mentioning those words were few and generally also contained the word “virus”. We received messages tweeted in Italian from the Twitter company and focused on the top retweeted messages, using an inclusion criterion that covered more than 50% of total retweeted messages and ignoring messages that did not attract attention from users. We used only the number of retweets as a metric of virality because, given our interest in examining the infodemic phenomenon, we focused on the diffusion of messages rather than on users’ reactions (e.g., likes, feelings, comments and replies).

Data analysis

We analyzed the content of the data using Python (Python Software Foundation) and its topic modelling functionality to detect the main topics discussed in the messages through computer-aided content analysis . Content analysis provides a useful and multifaceted methodological framework for Twitter analysis and supports the structuring of textual data by enabling categorizing and coding . Within content analysis, topic modelling is a type of statistical modelling for discovering abstract “topics” that occur in a collection of documents or, as in our case, tweets. A Latent Dirichlet Allocation (LDA) approach was used to classify and code text into particular topics . The original list obtained from the statistical analysis was then manually coded by the authors (MM, PT, and MLT). The emerging codes were circulated among the researchers, and the list of codes was included in a codebook. Several conference calls/meetings were held to fine-tune the codebook and to group codes that related to the same phenomena. We further analyzed the data until conceptual saturation was reached and no new codes or categories were generated or merged together .
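To make the topic-modelling step concrete, the sketch below shows how an LDA model of the kind described above can be fitted with scikit-learn. The three Italian toy tweets, the vectorizer settings and the number of topics are placeholders only; the study's actual preprocessing and its 14 final themes came from the full corpus and subsequent manual coding.

```python
# Minimal LDA topic-modelling sketch with scikit-learn (placeholder data).
# A real pipeline would also pass an Italian stop-word list to the vectorizer.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [  # hypothetical Italian tweets
    "il virus si diffonde, restate a casa",
    "nuove misure del governo contro il coronavirus",
    "ospedali sotto pressione per i contagi covid-19",
]

vectorizer = CountVectorizer(lowercase=True)
doc_term = vectorizer.fit_transform(tweets)

# The study's coding settled on 14 themes; 2 topics suffice for a toy corpus.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Top keywords per topic, analogous to the keyword lists used for classification.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```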
In addition, we manually coded the most retweeted messages by sender, using the descriptions provided by the users themselves in the presentation of their accounts and open coding . In some cases, when the account’s presentation was not enough to define a sender, we searched for his/her profession or role on the web. This coding approach means that we created new codes according to the senders’ descriptions of their accounts, thus creating categories reflecting the types of actors. We iterated the aggregation and creation of codes until reaching conceptual saturation with significant categories of actors. We therefore aggregated senders of tweets into five distinct categories: Institutions (e.g., messages from the government or the Italian NHI), News sources (e.g., messages from TV channels or journalists), Politicians (e.g., messages from personal accounts of politicians or political parties), Science sources (e.g., messages from scientists), and Influencers (i.e., all other influential users, including VIPs, celebrities, and private users who accounted for a large number of retweets, using a cut-off point of 1400 retweets). We employed a chi-square test of independence with standardized residuals to search for similarities and differences in the topics discussed by source (e.g., topics mainly discussed by institutions, politicians, etc.) using R software . As a second step, we analyzed the development of discussions and messages over time. We chose three periods: a) before February 24th; b) from February 25th to March 1st, when the number of infected individuals exceeded 200 and a few regions of the country had implemented social distancing; and c) from March 1st to March 10th, when the entire country was in lockdown. The numbers related to the daily spread of the disease are reported in Appendix 1 (Table ). The social network map was produced with the ForceAtlas2 algorithm in Gephi, open-source software for graph and network analysis that measures the relationships and flows between people, groups, or organizations . The layout provided by the software supported the grouping and alignment of connected nodes and helped to determine the community state of the social networks and to identify boundary-spanning opportunities. As a third step, a chi-square trend analysis was employed to search for linear trends between the COVID-19 crisis and the number of retweets from each source (i.e., influencers, institutions, news, politicians, scientific sources); for each topic, these counts and the total number of retweets were analyzed and compared to available COVID-19 morbidity and mortality data.
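The social-network step can be sketched as follows: build a directed retweet graph, attach each account's actor category as a node attribute, and export the graph in GEXF format for layout with ForceAtlas2 in Gephi. Every account name, category and edge below is an invented placeholder; the study's own graph-construction rules are not documented beyond the description above.

```python
# Sketch: assemble a weighted, directed retweet network and export it for Gephi.
import networkx as nx

retweets = [  # (retweeting account, original sender) - placeholder edges
    ("user_a", "news_outlet_1"),
    ("user_b", "news_outlet_1"),
    ("user_a", "scientist_1"),
    ("user_c", "influencer_1"),
]
category = {  # placeholder actor types
    "news_outlet_1": "news",
    "scientist_1": "science",
    "influencer_1": "influencer",
}

g = nx.DiGraph()
for src, original in retweets:
    if g.has_edge(src, original):
        g[src][original]["weight"] += 1
    else:
        g.add_edge(src, original, weight=1)

nx.set_node_attributes(g, {n: category.get(n, "other") for n in g.nodes}, "actor_type")

# Weighted in-degree = how often an account was retweeted (a simple relevance proxy).
print(sorted(g.in_degree(weight="weight"), key=lambda item: -item[1]))

# Export for ForceAtlas2 layout in Gephi.
nx.write_gexf(g, "retweet_network.gexf")
```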
We downloaded tweets posted on the topic of COVID-19 infection in Italy from February 11th to March 10th, 2020. A tweet is an online posting created by a Twitter user limited to 280 characters or less. Once published, the tweet will appear on the Twitter home pages of all users who follow the induvial who released the message. Users might retweet messages, amplifying selected and extending the spread of certain discussions. Twitter is the most heavily used micro-blogging platform in the world and provides access to its data. Although Twitter represents only a part of available social media, a number of studies have used Twitter data, with studies showing it is a reasonable proxy and representation of political, social and scientific opinions . We selected tweets based on their contents using both keywords and the hashtags: virus, Coronavirus, and COVID-19. Other keywords, such as, for example, SARS-CoV-2, were excluded since the tweets mentioning those words were few and also reporting the word “virus”. We received messages tweeted in Italian from the Twitter company and focused on the top retweeted messages, using an inclusion criterion that included more than 50% of total retweeted messages and ignored messages that did not attract attention from users. We only used the number of retweets as a metrics of virality because, since our interest was about examining the infodemic phenomenon, we were interested in the diffusion of the messages, instead of considering the users’ reactions (e.g., likes, feelings, comments and replies).
We analyzed the content in the data using Python (Python Software Foundation) and its topic modelling function to detect the main topics discussed in the messages using a computer-aided content analysis . Content analysis provides a useful and multifaceted, methodological framework for Twitter analysis and supports the structuring of textual data by enabling categorizing and coding . Within content analysis, topic modelling is a type of statistical modelling for discovering abstract “topics” that occur in a collection of documents or as in our case tweets. Latent Dirichlet Allocation (LDA) approach was used to classify and code text into particular topics . The original list obtained from the statistical analysis was then manually coded by the authors (MM, PT, and MLT). The emerging codes were circulated among the researchers, and the list of codes was included in a codebook. Several conference calls/meetings were held to fine-tune the codebook and to group codes that related to the same phenomena. We further analyzed the data until conceptual saturation was reached and no new codes or categories were generated or merged together . In addition, we manually coded the most retweeted messages by senders using the description provided by the users themselves in the presentation of their account using open coding . In some cases, when the account’s presentation was not enough to define a sender, we searched his/her profession or role using the web. This coding approach means that we created new codes according to the senders’ descriptions of their accounts, so creating categories reflecting the concepts about the types of actors. We iterated the aggregation and creation of codes until reaching a conceptual saturation with significant categories of actors. Therefore, we aggregated senders of tweets into five distinct categories: Institutions (e.g., messages from the government or the Italian NHI), News sources(e.g., messages from TV channels or journalists), Politicians (e.g., messages from personal accounts of politicians or political parties), Science Sources (e.g., messages from scientists), and Influencers (i.e. all the other influencing users, including V.I.P.’s, celebrities, and private users who accounted for a large number of retweets, using a cut-off point of 1400 retweets). We employed a chi-square test of independence with standardized residuals to search for similarities and differences in topics discussed by source (e.g. topics mainly discussed by institutions, politicians, etc.) using R software . As a second step, we analyzed the development of discussions and messages over time. We chose three periods: a) before February 24th; from Feb 25th to March 1st, when the number of infected individuals exceeded 200, and few regions of the country had implemented social distancing; and c) between March 1st and March 10th when the entire country was in lock-down. The numbers related to the daily spread of the disease are reported in Appendix 1 (Table ). The social network map using the ForceAtlas2 algorithm was produced using the software Gephi, open-source software for graph and network analysis that measures the relationships and flows between people, groups, or organizations . The layout provided by the software supported the grouping and alignment of nodes connected together and helped to determine the current community state of social networks and to identify boundary spanning opportunities. 
As a third step, a chi-square trend analysis was employed to search for linear trends between the progression of the COVID-19 crisis and the number of retweets from each source (i.e., influencers, institutions, news, politicians, scientific sources) for each topic; the total number of retweets was also analyzed and compared with available COVID-19 morbidity and mortality data.
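The chi-square test of independence described above, and the Pearson standardized residuals reported in the Results, can be sketched as follows. The topic-by-actor contingency table is invented for the example; the residual formula matches the one described in the Results.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative topic-by-actor contingency table (counts are invented);
# rows = topics, columns = actor categories.
observed = np.array([[120,  30,  15,  60,  25],
                     [ 40,  90,  20,  35,  15],
                     [ 10,  20, 110,  25,   5]])

chi2, p, dof, expected = chi2_contingency(observed)

# Pearson standardized residuals, (observed - expected) / sqrt(expected):
# positive values flag an attraction between a topic and an actor type.
residuals = (observed - expected) / np.sqrt(expected)

print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.3g}")
print(residuals.round(2))
```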
Topic analysis

Our data encompassed 74,306 messages that were retweeted more than 1.2 million times, out of a total of 2.3 million assessed retweets. The data analysis revealed 14 major themes that were intensely discussed by the five groups. Table reports the main topics discussed, with examples of each type of message, the number of retweets and the keywords used in the classification process. The chi-square analysis revealed significant differences between the topics discussed by each group (χ² = 8437.5, df = 52, p < 0.001). We produced a double-entry table (topics on rows and actors on columns) and compared the actual results with the expected results from the chi-square analysis. The differences between the actual and expected results were then divided by the square root of the expected value to obtain Pearson's residuals. The topic analysis was developed using the chisq.test function in R. The results are shown in Fig. . Positive residuals are coloured in blue, indicating an attraction between the corresponding row (topic) and column (actor). Negative residuals are coloured in red, indicating repulsion (negative association) between the corresponding row and column variables. The results show that influencers had higher standardized residuals, suggesting a higher than expected number of messages, for messages that spoke to fear of foreigners and blamed immigrants in Italy for starting the COVID-19 outbreak. Politicians had higher residuals (suggesting higher than expected numbers of messages) for messages connected with managing the economic fallout and with supporting citizens, businesses and hospitals during the crisis. Not surprisingly, infection risks and rates and epidemiological information commonly originated from Scientific sources. In contrast, News sources were mainly concerned with the closing of entertainment venues, restaurants, schools and universities, identifying early cases of infection and highlighting the slowdown in the economy. Institutional sources had a higher propensity for information and guidance directing the behaviour of citizens.

Actor type relevance

The findings of the Social Network Analysis suggested a prevalence of specific messages during the three periods (Fig. ). During the first time-period, messaging was dominated by influencers, with several prominent actors attracting and guiding the national discourse. The results demonstrated, however, that during the most critical days, February 19th and 20th, when the first Italians tested positive for COVID-19 , the average percentage of retweets for influencers fell from 55% to 25% of the daily total, while scientific sources rose from an average of 8% to 42–48% of total tweets. During the second time-period, news channels and broadcasters received more attention, but influencers were still relevant and often undermined the scientific messaging. During the third time-period, scientific sources began to dominate the discussion, building public confidence with a messaging flow that was topically congruent and connected to news sources, with both emphasizing key messages for dealing with the pandemic.
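The daily-share computation behind percentages of this kind reduces to a groupby-and-normalize step; the sketch below is a hypothetical reconstruction in which the dates, actor categories and counts are all invented.

```python
import pandas as pd

# Invented daily retweet counts per actor category.
df = pd.DataFrame({
    "date":     ["2020-02-19"] * 3 + ["2020-02-20"] * 3,
    "actor":    ["Influencers", "Science", "News"] * 2,
    "retweets": [5500, 800, 1700, 2500, 4200, 3300],
})

# Each row's share of that day's total retweets, in percent.
daily_total = df.groupby("date")["retweets"].transform("sum")
df["share_pct"] = 100 * df["retweets"] / daily_total

print(df.pivot(index="date", columns="actor", values="share_pct").round(1))
```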
Topic and actor type relevance during the crisis development

The chi-square trend analyses follow the three main periods previously discussed (Tables and ). Across the three time-periods, infection rates were increasing, with the total number of cases moving from three on February 11th to 1694 on March 1st, and to 10,142 on March 10th. The results showed that some topics dropped dramatically in their trending as the crisis intensified. These included tweets promoting anti-immigrant propaganda and fearmongering against foreigners. Other topics, particularly science-based and practical information, grew in their relative importance and urgency. The reduced messaging by influencers made room for a range of sources to contribute solutions and build confidence. These included scientific sources, politicians and institutions, which collectively contributed to messaging that sought to build social trust and community activation.
This paper contributes to the significant body of literature examining the COVID-19 pandemic. Our results show the importance of social media in supporting a community-wide and ultimately nationally coordinated effort to build public awareness and engagement during the COVID-19 pandemic. Analysis of social media themes highlights both useful and damaging messages, including false claims that blamed COVID-19 on foreigners. Interestingly, several actors without a scientific background encouraged discussions about how best to prepare for COVID-19. In contrast, contentious and dissenting voices might slow the process of reaching a fact-driven consensus, and even promote counterproductive actions such as downplaying the danger of public crowding.

Twitter and other social and digital means of communication have become essential channels for physicians and scientists to spread health and public health information . Twitter proved to be a powerful knowledge translation tool for transferring meaningful knowledge from healthcare authorities to the population about what should be done or avoided . The Twitter feed ultimately benefitted the global community during the pandemic by serving as a readily accessible and trusted source of reliable, science-based information . COVID-19 also highlighted the danger of a serious infodemic , with an over-abundance of information of uncertain accuracy making it difficult for individuals to select sources of actionable information and guidance.

Political actions and actors impacted the spread of the coronavirus by denying the realities of COVID-19 and promoting social interactions under the motto "let's keep our habits, we can't stop Milan and Italy." These public actions effectively helped spread the virus, only to be retracted days later as the number of people affected by COVID-19 dramatically increased and the pandemic's mortality emerged . Some tweets contained messages promoting fear and falsely blaming foreigners for the illness, creating a false narrative that underestimated the severe impact of COVID-19. These actions undermined social trust and preparedness, exacerbating a general sense of fear and panic across Italy as the pandemic fast became a debilitating national emergency.

There are many lessons to be learned by other nations from the experience of social media messaging in Italy. Paramount is the importance of clear, convincing, fact-based and actionable messaging to overcome misinformation and garner the trust of the public during a challenging time. A handful of countries, including Singapore, Taiwan, Germany and Iceland, managed to stay on top of their outbreaks by adhering to radical transparency and promoting community activism, while aggressively testing to find cases, quarantining contacts, and keeping viral transmission from entering an exponential growth phase . Taiwan's early recognition of the crisis, daily briefings to the public, and simple health messaging allowed the government to reassure the public by delivering timely, accurate, and transparent information regarding the evolving epidemic . Additionally, the lessons from China and Singapore show that even when non-pharmaceutical interventions (NPIs) reach the target of limiting COVID-19 spread, newly emerging cases can require the reintroduction of containment measures. Timeliness in accurate public messaging, using both smart technology and traditional press conferences by trusted leaders, was crucial. Our study has inherent limitations.
First, while Twitter has become an essential platform for textual communication and information sharing, tweets represent only a sample of people's communication and human interactions; however, many studies using Twitter data consider it a reasonable proxy of users' mental models . Second, tweets exhibit specific characteristics of brevity, fluidity, and meaning embedded in a broader context, which can pose challenges for researchers engaged in content analyses. Third, the statistical methods we used to analyze tweets, key messages, key actors and their evolution over time, although appropriate, are as yet unvalidated for this purpose. Fourth, the COVID-19 pandemic and its rapidly evolving social, psychological, economic and geographical attributes mean that the analysis is accurate only within the limited time frame studied; these covariates may alter the findings and statistical results wherever they are applied, and further development of statistical procedures that can be validated and replicated is needed for this type of data. Fifth, our analysis focused only on tweets posted in Italy, and the different uptake of social networks across generations, genders, cultures and countries might affect our results. Finally, we focused on a limited period of time; extending our analyses could well have led to different findings.
Societies need to respond quickly to pandemics to protect the health and well-being of their citizens. In this exploratory work, we developed a systematic method to analyze Twitter messages and understand key messages, key actors and their evolution over time. We showed that social media can be used effectively to respond to a pandemic through transparent and convincing messages, rooted in scientific knowledge, that help build confidence and improve the implementation effectiveness of policies, ultimately serving as an effective knowledge translation tool to facilitate communication with the population. Despite the infodemic and the threats from fake news, trolls and bots that automatically produce and share content on social media, the scientific voice and other institutional sources of information were able to dominate and spread among people during the acute phase of the outbreak, thereby gaining public trust and engagement in facing the pandemic. Countries that are transparent about the state of their country and provide truthful health information to their citizens will likely gain public trust and more rapid NPI uptake and compliance. Finally, we believe that an area for future research entails examining how social media and other readily collected public data could be leveraged to improve methods for public messaging, to assess the spread of the virus and to support appropriate public health actions. Twitter can be leveraged to improve population health preparedness, enable a better and earlier public response and support public policy actions .
Functional characterization and molecular fingerprinting of potential phosphate solubilizing bacterial candidates from Shisham rhizosphere | b6ab75d6-151b-468c-b46e-adb4bb50c97d | 10147649 | Microbiology[mh] | Shisham ( Dalbergia sissoo Roxb.) is a nitrogen fixing tree species belonging to the family-Fabaceae. It is similar to nitrogen fixing agricultural crops in forming root nodules through symbiosis with rhizobia and is extensively used for commercial practices . Shisham is an important timber tree, growing throughout sub-Himalayan tract upto 1200 m of altitude . It is a valued tree species, and its global popularity has increased greatly over the past few decades owing to its potent fast growth, multi-purpose uses and nitrogen-fixing ability . Shisham is used for its high-quality timber, fuelwood with various byproducts as well as for its intercropping system to produce maximum yield of forage based farming system . The decrease in productivity of Shisham affects the source of income of rural families and global economy . Wilting is one of the most devastating plant diseases worldwide causing Shisham mortality. Soilborne pathogens are important production constraints leading to reduced growth, yield loss, and threaten adult tree and young tree populations . The declining population of Shisham can be effectively protected by the application of functional bacteria , . The rhizosphere represents the intense zone of plant–microbe interaction. Among the microbes bacteria are the most abundant taxonomic group . Rhizospheric bacteria that exhibit plant growth promotion characteristics are known as plant growth promoting rhizobacteria (PGPR). Rhizobacteria promotes growth in plants directly by synthesizing plant growth hormones, enhanced uptake of nutrients or indirectly by inhibiting the phytopathogens attack as well as many other mechanisms . Rhizobacteria promoting plant growth and providing protection from wide range of plant pathogens via several direct and indirect mode of actions are called microbial biological control agents (MBCA). PGPR are potential agents for disease suppression of several phytopathogens and induction of systemic resistance against nematodes and insects via synthesis of antimicrobial metabolites . In addition, some other mechanisms of beneficial bacteria such as competition, interfering with the host immunity to establish a mutualistic association with the host and antagonism can protect plants against pathogen attack , . Berg and Koskella reported that beneficial members of plant microbiome can contribute to boost host immune functions. Moreover, immunity of plant may play a major role in determining growth and accommodation of beneficial microbes which further contributes to the association of a stable microbial community inside as well as in their root zone, thus playing a crucial role in regulating variations in microbiota composition , . However, the microbial composition in rhizospheric region is decided majorly by plant secondary metabolites and root exudates . Several PGPRs ( Rhizobium , Burkholderia , Klebsiella , Pseudomonas , Azotobacter and Bacillus ) are reported for N 2 fixation, P solubilization, siderophore production, zinc solubilization and phytohormone production , . Bacteria solubilize the insoluble phosphate in medium by oxidation of glucose to gluconic acid or its derivative i.e., 2-ketogluconic acid. The production of acid reduces the soil pH which aids in mineralization of phosphate and makes it available to the plant root . 
Beneficial effects of P solubilizing bacteria on crops have been evaluated by Raymond . The numerous applications of PSB make it essential to explore their diversity, which may further help in designing alternative strategies and using these potent strains as bioinoculants. Moreover, community structure is affected by several factors such as host interaction, fertilizer application, irrigation, and climate . In order to identify endogenous PSB with a greater ability to survive under stress conditions and to develop them as biofertilizers for diverse crops, it is necessary to learn about the bacterial diversity among them and to assess the extent of changes in the bacterial community. Knowledge of the molecular diversity of PSB can aid the selection of the dominant P-solubilizing bacterial strains used as biofertilizers. Various organic acids, viz. gluconic acid, citric acid, malic acid, oxalic acid, fumaric acid, malonic acid, tartaric acid, propionic acid, glyoxylic acid, butyric acid, glutaric acid and adipic acid, are reported for phosphate solubilization, but among these gluconic acid is the one most commonly produced by phosphate solubilizing bacteria , . Production of gluconic acid in bacteria occurs mainly with the help of the enzyme glucose dehydrogenase (GDH), encoded by the gcd gene, via the direct oxidation pathway . The cofactor pyrroloquinoline quinone (PQQ), encoded by the pqq operon consisting of six core genes ( pqqA–F ), is required for the effective functioning of the GDH enzyme . The cloning and expression of genes involved in the biosynthesis of PQQ showed the importance of gluconic acid, and of its derivative 2-ketogluconic acid, in phosphate solubilization . Sonnenburg and Sonnenburg suggested that the signature genes primarily involved in the pqq biosynthesis pathway are pqqA , pqqC , pqqD , and pqqE , as recognized by gene knockout experiments. The majority of identified pqq genes in bacterial isolates belong to the α, β and γ classes of Proteobacteria and are primarily present in gram-negative bacteria . Bacterial genera commonly found to carry pqq genes include Acinetobacter, Azotobacter, Beijerinckia, Bradyrhizobium, Burkholderia, Erwinia, Gluconoacetobacter, Klebsiella, Gluconobacter, Methylobacillus, Methylobacterium, Mycobacterium, Pseudomonas, Rhizobium, Streptomyces, and Xanthomonas . Growth conditions such as a high glucose concentration as carbon source and a high insoluble phosphate level significantly affect the biosynthesis of glucose dehydrogenase and the PQQ level . The characterization of PSB colonizing the rhizosphere of Shisham trees, and their effects on plant growth under stress conditions, remains underexplored. Hence it is necessary to investigate the effect of P-solubilizing bacterial diversity on soil health and the mechanisms involved in the rhizospheric region. Therefore, in the current study we aimed to explore the rhizosphere of Shisham trees in various unexplored soils and to screen the P-solubilizing bacteria most effective in mitigating environmental stress conditions. The primary aims of this study were to: (1) find the optimal P-solubilizing bacteria that are most effective under various environmental and growth conditions by screening the Shisham rhizosphere in different unexplored soils; (2) functionally and molecularly characterize the isolated PSB strains to explore the biodiversity among different rhizospheric regions of Shisham; and (3) validate the corresponding mechanisms and genes involved in P-solubilization.
Soil sampling

Soil samples were collected from the rhizosphere of Shisham forests located at three sites in India: Pantnagar (29.0222° N, 79.4908° E), Lachhiwala (30.2230° N, 78.0766° E) and Tanakpur (29.0722° N, 80.1066° E). The three sites represent different agroecological zones and niches, each diversified with distinct vegetation cover, soil, and other natural resources. The Shisham trees in the Lachhiwala and Tanakpur forests were healthy, but the Shisham trees in the Pantnagar forest were diseased. From each forest region, three trees were identified for rhizospheric soil sample collection within a range of 1–10 m. Samples were collected in triplicate from the rhizospheric soil (15 cm depth) of Shisham trees during the winter season. The samples were pooled to generate a representative composite sample, transferred to the laboratory in sterilized zip-lock soil sampling bags and kept at −20 °C until further analysis.

Soil physico-chemical characteristics

Soil samples were air dried for physico-chemical analysis, which included determination of soil pH, electrical conductivity, total organic carbon (TOC), total nitrogen (TN), available potassium (AK) and trace elements such as Fe and Zn , . To verify the results statistically, one-way analysis of variance (ANOVA) was performed at the p < 0.05 level using SPSS software.

Soil enzymatic assays

Each soil sample was analyzed spectrophotometrically for soil microbial enzyme activities, reflecting the contribution of the microbial community in the rhizospheric region. The exact concentration of each analyzed enzyme product was determined by plotting a standard curve. All soil microbial enzymatic assays were performed in triplicate. Dehydrogenase activity was determined as reported by Thalmann . Fluorescein diacetate (FDA) hydrolysis activity was determined according to Inbar et al. . Alkaline and acid phosphomonoesterase activities were assayed according to the method of Tabatabai and Bremner . Urease activity in soil was determined as given by Kandeler and Gerber .

Soil microbial enumeration

Bacteria in rhizospheric soil (total aerobic bacterial count) were enumerated through serial dilution and pour plating on Angle's medium, whereas Pikovskaya medium was used for phosphate solubilizing bacteria . The bacterial population per gram of soil was determined by counting colonies and expressing the result as colony forming units (CFU) after 2–3 days of incubation at 30 ± 1 °C. Both media were supplemented with 100 mg L⁻¹ of cycloheximide to inhibit fungal growth .
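The plate-count arithmetic used here reduces to a one-line calculation; the sketch below is a simplified illustration with invented numbers, chosen to land in the 10⁴ cfu g⁻¹ range reported in the Results.

```python
# Standard plate-count arithmetic: CFU per gram of soil.
def cfu_per_gram(colonies: int, dilution_factor: float, volume_ml: float = 1.0) -> float:
    """CFU g-1 = colonies counted x dilution factor / volume plated (mL)."""
    return colonies * dilution_factor / volume_ml

# e.g. 155 colonies on a plate from a 10^-2 dilution, 1 mL plated:
print(cfu_per_gram(155, 1e2))  # -> 15500.0, i.e. 1.55 x 10^4 cfu g^-1
```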
Selection of rhizobacterial isolates based on biochemical and plant growth promotion traits

Biochemical characterization of the bacterial isolates was conducted for amylase, urease, nitrate reductase, lipase, xylanase, protease, pectinase, and catalase activity. In vitro PGP traits of the rhizobacterial isolates were assessed as production of siderophore, indoleacetic acid (IAA), ammonia and hydrogen cyanide (HCN), and solubilization of zinc. For all these biochemical and functional trait analyses, the protocols described by Joshi et al. were followed.

Phosphate solubilizing efficiency of phosphate solubilizers

Phosphate solubilizing bacteria were isolated by the serial dilution and pour plate technique on Pikovskaya's (PK) medium. To provide optimum growth conditions, the inoculated plates were incubated at 28 ± 2 °C for 3–4 days. Bacterial colonies surrounded by halo zones were picked and restreaked to obtain pure cultures. All pure cultures were spot inoculated on Pikovskaya medium and incubated at 30 °C for 48 h, and the halo zones surrounding the colonies were measured. The solubilization efficiency (SE) and solubilization index (SI) of the PSB isolates were calculated as , :

$$\text{Solubilization Efficiency (SE)} = \frac{\text{Diameter of bacterial growth}}{\text{Diameter of clear zone}} \times 100$$

$$\text{Solubilization Index (SI)} = \frac{\text{Diameter of bacterial growth} + \text{Diameter of clear zone}}{\text{Diameter of colony}}$$
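For illustration, the two formulas above transcribe directly into a short calculation; the diameters used below are invented example values, not measurements from this study.

```python
# Direct transcription of the two plate-assay formulas given above.
def solubilization_efficiency(growth_diam: float, clear_zone_diam: float) -> float:
    """SE = diameter of bacterial growth / diameter of clear zone x 100."""
    return growth_diam / clear_zone_diam * 100

def solubilization_index(growth_diam: float, clear_zone_diam: float,
                         colony_diam: float) -> float:
    """SI = (diameter of growth + diameter of clear zone) / colony diameter."""
    return (growth_diam + clear_zone_diam) / colony_diam

print(solubilization_efficiency(0.6, 1.8))   # 0.6 cm colony, 1.8 cm zone
print(solubilization_index(0.6, 1.8, 0.6))   # (0.6 + 1.8) / 0.6 = 4.0
```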
Quantitative estimation of phosphorus

Selected bacterial cultures were transferred to 25 mL of National Botanical Research Institute's phosphate growth medium (NBRIP: glucose 10 g L⁻¹, calcium phosphate 5 g L⁻¹, magnesium chloride hexahydrate 5 g L⁻¹, magnesium sulfate heptahydrate 0.25 g L⁻¹, potassium chloride 0.2 g L⁻¹, ammonium sulfate 0.1 g L⁻¹) and grown for 72 h at 28 ± 1 °C and 120 rpm. After this growth period the cultures were centrifuged for 15 min at 5000 rpm. Supernatant (1 mL) was taken in a test tube, to which were added sequentially 60% perchloric acid (0.4 mL); molybdate solution, 2.5% ammonium molybdate in 5 N H₂SO₄ (0.4 mL); colouring reagent, 10 mL of 5% sodium bisulphate with 20% sodium sulphite and 25 g 1-amino-2-naphthol-4-sulphonic acid (0.2 mL); and triple distilled water (TDW, 4 mL). The test tubes were then incubated for 30 min at room temperature. The appearance and intensity of the blue colour indicate the total concentration of phosphorus, and the absorbance was measured at 640 nm .

Molecular characterization, identification, and phylogenetic analysis

Genomic DNA of all 18 isolates was extracted using the alkaline lysis method, and its purity was checked on an agarose gel. The 16S rDNA gene was amplified from the template DNA of all 18 bacterial isolates recovered from the different provenances of Shisham, using forward primer GM3f (5ʹTACCTTGTTGTTACGACTT3ʹ) and reverse primer GM4r (5ʹTACCTTGTTACGACTT3ʹ). The PCR product was electrophoresed in a 1.0% agarose gel at 80 mA for 1 h alongside a λ DNA/EcoRI/HindIII double digest ladder . The purified 16S rDNA amplicons were sent to the Biotech Centre, UDSC, New Delhi for sequence analysis. The obtained nucleotide sequences were processed for homology using BLASTn through EzBioCloud's database ( https://www.ezbiocloud.net/identify ) . All sequences were aligned with MEGA7 (Molecular Evolutionary Genetic Analysis version 7.0) software for constructing a phylogenetic tree .

Fingerprinting of selected bacterial isolates

The purified 16S rDNA amplicon of each of the 18 isolates was digested with three tetra-cutter restriction endonucleases, namely MspI, AluI, and BsuRI. Digestion was set up in a 25 µL reaction mixture, which included 20 µL of amplicon together with 1X assay buffer and 1 U per reaction of the restriction endonuclease MspI, AluI, or fast-digest BsuRI. For digestion with MspI and AluI the reaction mixture was kept at 37 °C for 2 h, and for fast-digest BsuRI for 5 min. Thereafter the enzymatic reaction was inactivated by adding loading dye, and the mixture was kept at −20 °C. The restriction digestion products were analyzed on a 2.5% agarose gel electrophoresed at 60 V, and the band patterns were visualized under a UV gel documentation system .
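The ARDRA digestion can also be previewed in silico. The sketch below uses Biopython's Restriction module (AluI and MspI shown; BsuRI would be handled analogously) on a short placeholder sequence rather than the actual ~1.5 kb amplicon.

```python
from Bio.Seq import Seq
from Bio.Restriction import AluI, MspI

# Placeholder sequence containing AluI (AG^CT) and MspI (C^CGG) sites.
amplicon = Seq("AGCTGGATCCGGTAGCTAGCTTCCGGAAGCTAGCTAGCCGG")

for enzyme in (AluI, MspI):
    # search() returns the 1-based position of the first base after each cut.
    cuts = sorted(enzyme.search(amplicon))
    edges = [1] + cuts + [len(amplicon) + 1]
    fragments = [b - a for a, b in zip(edges, edges[1:])]
    print(enzyme, "cut positions:", cuts, "fragment sizes (bp):", fragments)
```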
Amplification of pqqA and pqqC genes

The genomic DNA of selected isolates was amplified using a GeneAmp PCR System 9700 (Applied Biosystems) in a 20 μL volume. The primers used for the pqqA gene were forward primer pqqA-F: 5ʹATGTGGACCAAACCTGCATAC3ʹ and reverse primer pqqA-R: 5ʹGCGGTTAGCGAAGTACATGGT3ʹ, while the primer set for the pqqC gene was forward primer pqqC-F: 5ʹATTACCCTGCAGCACTACAC3ʹ and reverse primer pqqC-R: 5ʹCCAGAGGATATCCAGCTTGAAC3ʹ. The composition of the reaction was: 10X assay buffer (1×), MgCl₂ (0.5 mM), dNTPs (200 µM), Taq polymerase (1 U), forward and reverse primers (0.3 μM each), and template DNA (50 ng). For the amplification of the two pqq genes the reaction conditions were: initial denaturation at 94 °C for 5 min (1 cycle); denaturation at 94 °C for 30 s (30 cycles); annealing at 50 °C for 30 s; extension at 72 °C for 1 min; and final extension at 72 °C for 10 min (1 cycle). The presence of amplified fragments was checked on a 2.0% (w/v) agarose gel with a 50 bp DNA ladder .

Statistical analysis

The experimental data (qualitative and quantitative) were statistically processed using t-tests (Cochran and approximate t-tests). All results are expressed as mean ± SEM. F values for which p < 0.05 were considered significant .
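Both the enzyme assays and the Fiske–Subbarow phosphorus estimation described above rely on a standard curve; the following sketch shows the generic fit-and-invert step with invented standards and readings.

```python
import numpy as np

# Invented standard series: known concentrations and their absorbances.
std_conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])   # µg mL-1
std_abs  = np.array([0.00, 0.11, 0.22, 0.45, 0.90])  # absorbance

# Fit a linear standard curve: A = slope * C + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def to_concentration(absorbance: float) -> float:
    """Invert the linear curve: C = (A - intercept) / slope."""
    return (absorbance - intercept) / slope

# Convert a sample reading (e.g. A640 in the phosphorus assay):
print(round(to_concentration(0.35), 1), "µg mL-1")
```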
Soil physico-chemical analysis

Soil physico-chemical analysis was performed to assess soil nutrient status and health. The macro- and micro-nutrient contents, along with other important parameters (soil type, pH and electrical conductivity), of Shisham rhizospheric soil from the three provenances are presented in Table . The soil texture was silty loam in the Lachhiwala and Tanakpur regions, whereas it was silty clay loam in Pantnagar. Soil pH in Pantnagar (6.85) was higher than in Lachhiwala (6.00) and Tanakpur (6.12). Electrical conductivity was 0.11 dS m⁻¹ for Lachhiwala, 0.14 dS m⁻¹ for Tanakpur and 0.13 dS m⁻¹ for Pantnagar. Total organic carbon in Pantnagar, Lachhiwala and Tanakpur was 42,750 kg ha⁻¹, 19,500 kg ha⁻¹ and 25,000 kg ha⁻¹, respectively. Available phosphorus in soil was highest in Lachhiwala (56.48 kg ha⁻¹) compared with Pantnagar (37.86 kg ha⁻¹) and Tanakpur (46.87 kg ha⁻¹). Total nitrogen (TN) in Pantnagar, Lachhiwala and Tanakpur was 137.98 kg ha⁻¹, 163.07 kg ha⁻¹ and 100.35 kg ha⁻¹, respectively, while soil potassium was 505.34 kg ha⁻¹, 434.11 kg ha⁻¹ and 520.12 kg ha⁻¹, respectively. The iron (22.6 kg ha⁻¹) and zinc (11 kg ha⁻¹) contents were highest in Tanakpur soil compared with Lachhiwala (Fe: 12.5 kg ha⁻¹; Zn: 9.3 kg ha⁻¹) and Pantnagar (Fe: 11 kg ha⁻¹; Zn: 0.2 kg ha⁻¹) (Table ). The soil nutrient data were analysed statistically; the ANOVA (p < 0.05) revealed highly significant differences between soil nutrient values at Lachhiwala, Tanakpur and Pantnagar.

Soil enzymatic activities

Alkaline phosphatase, acid phosphatase, fluorescein diacetate (FDA), dehydrogenase and urease activities were determined in Shisham rhizospheric soils from the three forest locations. Alkaline phosphatase activity ranged from 442.8 µg PNP g⁻¹ h⁻¹ at Tanakpur to 1196.2 µg PNP g⁻¹ h⁻¹ at Lachhiwala. The highest acid phosphatase activity, 1109.6 µg PNP g⁻¹ h⁻¹, was found in Lachhiwala, followed by Tanakpur (654.5 µg PNP g⁻¹ h⁻¹) and Pantnagar (574.8 µg PNP g⁻¹ h⁻¹). FDA hydrolysis activity in Lachhiwala, Tanakpur and Pantnagar was 291.2, 372.6 and 325 µg fluorescein g⁻¹ h⁻¹, respectively. Dehydrogenase activity was more than two-fold higher in the Tanakpur forest (4300 µg TPF g⁻¹ h⁻¹) than in the Lachhiwala forest (1880 µg TPF g⁻¹ h⁻¹), while the lowest activity was recorded in the Pantnagar forest (1770 µg TPF g⁻¹ h⁻¹). The maximum urease activity was observed in the Shisham rhizosphere soil from the Lachhiwala forest (241 µg NH₄⁺ g⁻¹ h⁻¹), followed by the Pantnagar forest (192.25 µg NH₄⁺ g⁻¹ h⁻¹); the minimum was observed in rhizosphere soil from the Tanakpur forest (65.78 µg NH₄⁺ g⁻¹ h⁻¹). There were significant differences (p < 0.05) between the enzyme activities of Shisham rhizosphere soils from the three provenances (Table ).

Soil microbial enumeration

The total bacterial population enumerated on Angle's medium in the Shisham rhizospheric soil of Tanakpur, Lachhiwala and Pantnagar was 2.76 × 10⁴, 1.87 × 10⁴ and 1.96 × 10⁴ cfu g⁻¹ of soil, respectively, whereas the counts of phosphate solubilizing bacteria were 1.20 × 10⁴, 1.55 × 10⁴ and 1.06 × 10⁴ cfu g⁻¹ of soil, respectively.
The selected PSB isolates were coded according to their native rhizospheric region in each forest (Table ), and their morphological characteristics were also recorded (Table ).

Solubilizing efficiency of P solubilizers

Overall, 18 PSBs, eight from Lachhiwala, four from Pantnagar and six from Tanakpur, were recovered on Pikovskaya agar plates from the Shisham rhizospheric soil of the different provenances (Fig. ). All eighteen bacterial isolates exhibited zones of solubilization in the range of 1.16 to 4.75 cm on Pikovskaya agar plates (Fig. ). The isolates from the Lachhiwala provenance showed a higher phosphate solubilizing index than those from Tanakpur and Pantnagar, with the highest P solubilizing index (PSI) detected in L4 and the lowest in T4 (Table ). Bacteria-mediated phosphorus solubilization was quantified following the Fiske and Subbarow (1925) method. Of the eighteen bacterial isolates, L4 solubilized the highest amount of phosphorus (891.38 µg mL⁻¹) and T4 the lowest (285.78 µg mL⁻¹) (Fig. ). The solubilization index of the PSBs as detected on Pikovskaya agar plates correlated positively with the amount of P solubilized in NBRIP liquid medium.

Functional characterization of PSB recovered from the Shisham rhizosphere

The selected PSB strains were screened for various enzyme activities and plant growth promoting properties. All PSBs exhibited one or more of the enzyme activities amylase, urease, nitrate reductase, lipase, xylanase, protease, pectinase and catalase (Fig. ). Among the eighteen isolates, four (L7, L8, T3 and T5) were positive for amylase production. The urease test was positive for L4, P2, T2 and T6. All isolates except L4, T1, T3, T4, T5 and T6 exhibited nitrate reduction. Five of the eighteen PSBs (L7, L8, P2, T3 and T5) were positive for lipase activity, and eight (L7, L8, P1, P4, T1, T3, T4 and T5) for xylanase production. Five isolates from Lachhiwala (L1, L2, L5, L7 and L8) and one each from Pantnagar (P2) and Tanakpur (T5) were positive for protease production, as a halo zone was observed around bacterial growth on skim milk agar plates. Six of the eighteen isolates (L1, L5, P1, P3, P4 and T1) were positive for pectinase production. Except for L6, T1 and T3, all isolates produced catalase, as gas bubbles and effervescence were observed after the addition of drops of H₂O₂. Among the 18 PSBs, seven isolates were able to solubilize zinc; zinc solubilization efficiency was highest in L3, L5, P2 and T2 and lowest in L4, P3 and P4. Five isolates were positive for siderophore production, with orange halos largest in L7, L8, T1 and T3 and smallest in L1. IAA production was highest in L4, P3, T1, T2 and T4 and lowest in L1, L5, L6, L7, L8, P1, P4, T3 and T5. All isolates except P2 were negative for HCN production. Ammonia production in peptone water, marked by a colour change from yellow to orange, was positive for all isolates except L6, L7, T1 and T3. Hence, all bacterial isolates exhibited multiple PGP traits along with inorganic P solubilization (Fig. ; Table ).

Molecular characterization, identification and phylogenetic analysis

PCR amplification of the 16S rDNA gene region of all eighteen PSB isolates recovered from the Shisham rhizosphere of the different provenances produced a distinct band of 1492 bp on the agarose gel (Fig. ). The bacterial isolates were identified by comparing their 16S rDNA sequences with reference strains using the BLASTn programme.
Of the eighteen isolates, seven were identified within the genus Pseudomonas : three from Lachhiwala (L1, L3 and L5) and four from Pantnagar (P1, P2, P3 and P4). Four isolates were identified as Streptomyces sp. (L6, L7, T3 and T5), two each as Klebsiella sp. (L4 and T2) and Staphylococcus sp. (L2 and T6), and one each as Pantoea sp. (L8), Kitasatospora sp. (T1) and Micrococcus sp. (T4). All eighteen strains were thus assigned to 7 genera distributed across three phyla, Proteobacteria, Actinobacteria and Firmicutes: Pseudomonas , Klebsiella , Streptomyces , Pantoea , Kitasatospora , Micrococcus and Staphylococcus (Fig. ; Table ). The seven Pseudomonas strains were identified as L1 (98.14% similarity to Pseudomonas simiae strain NR 042392.1), L3 and L5 (99.16% similarity to Pseudomonas paralactis strain KP756923), P1 (98.89% similarity to Pseudomonas hunanensis strain JX545210), P2 (97% similarity to Pseudomonas aeruginosa strain NR 117678.1), P3 (98.14% similarity to Pseudomonas putida strain Z76667.1) and P4 (98.42% similarity to Pseudomonas plecoglossicida strain NR 114226.1). Strain L8 was identified as Pantoea sp. (96.83% similarity to Pantoea conspicua strain NR 116247.1). Two strains, L4 and T2, were identified as Klebsiella sp. (99.51% similarity to Klebsiella variicola strain CP010523 and 96.37% similarity to Klebsiella singaporensis strain AF250285, respectively). Strain L2 was assigned to Staphylococcus petrasii (97.98% similarity, NR 118450.1) and T6 to Staphylococcus pasteuri (NR 114435.1). Isolates belonging to the phylum Actinobacteria clustered together; these included T4 (98.0% similarity to Micrococcus yunnanensis strain NR 116578.1), T1 (93.86% similarity to Kitasatospora kifunensis strain NR 112085.2), L6 (87% similarity to Streptomyces curacoi strain KY585954.1), L7 (95% similarity to Streptomyces cellostaticus strain NR 112304.1), T3 (94.22% similarity to Streptomyces antibioticus strain NR 043348.1) and T5 (97.92% similarity to Streptomyces griseoruber strain NR 041086.1). The 16S rDNA sequences of all eighteen isolates were deposited in NCBI GenBank under accession numbers MG966339–MG966355 (Table ).

DNA fingerprinting of selected bacterial isolates

Based on amplified ribosomal DNA restriction analysis (ARDRA) profiles and morphological characters, the isolates were differentiated and taxonomically grouped. Restriction of the amplified 16S rDNA with the endonucleases generated DNA fragments of 100–1000 bp. Restriction enzyme AluI generated 2–4 well-resolved bands of 100 bp to 700 bp in all eighteen isolates and resolved the 18 strains into eight different genotypes (Fig. a). Restriction of the amplified 16S rDNA region with BsuRI resulted in 2–4 well-resolved bands in the range of 100 to 1000 bp and resolved the 18 strains into six different genotypes (Fig. b). The restriction profiles obtained with MspI consisted of one to three well-resolved bands in the region from 200 to 600 bp (Fig. c) and distinguished the eighteen isolates into eight genotypes.

Combined UPGMA dendrogram based on DNA fingerprint profiles

An unweighted pair group method with arithmetic mean (UPGMA) dendrogram based on Jaccard's coefficient was constructed from the ARDRA profiles of the 16S rDNA region with AluI, BsuRI and MspI using NTSYSpc version 2.0 software . The restriction profiles were interpreted on the basis of the bands developed.
Similar banding patterns obtained after combining the three independent digestions were grouped. The isolates showed higher polymorphism with AluI and MspI than with BsuRI: eight different restriction patterns were obtained with AluI and with MspI, whereas six were obtained with BsuRI. Phylogenetic relationships within the gram-negative and gram-positive isolates were revealed by clustering them separately by UPGMA. In the UPGMA cluster based on RFLP with AluI, BsuRI and MspI, all gram-negative strains grouped into two major clusters, A and B (Fig. a). Cluster A included five isolates (L1, L3, L5, P1 and P3) and was further divided into two subclusters: subcluster I included L1, L3 and L5, and subcluster II grouped P1 and P3. L3 and L5 in subcluster I exhibited 100% similarity and were related to L1 at a distance of 0.80 on the Jaccard scale. Cluster B included the remaining strains, P4 and P2, related at a distance of 0.60 on the Jaccard scale. A separate dendrogram was constructed for the gram-positive bacteria (Fig. b). The majority of the gram-positive isolates were placed in a single cluster, which was further divided into two subclusters at 0.80 on the Jaccard scale: subcluster I included L6 and L7, whereas subcluster II included T3 and T5. Isolate T4 was placed singly on an outlying branch at a distance of 0.60 on the Jaccard scale, and isolate T1 was distantly related (0.35 on the Jaccard scale) to all the other strains.

Amplification of pqqA and pqqC genes

To confirm the conserved genomic regions ( pqqA and pqqC ) involved in gluconic acid formation, pqq gene amplification was carried out with the designed primers. Of the eighteen bacterial isolates, sixteen showed positive amplification of the pqqC gene (82 bp band), whereas six isolates, namely L1, L3, L5, P1, P3 and P4, showed positive amplification of the pqqA gene (72 bp band) (Figs. , ). The positive amplification of both pqqC and pqqA in these six isolates suggests that they possess two crucial genes of the PQQ biosynthesis pathway.
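The combined-fingerprint clustering is equivalent to average-linkage (UPGMA) clustering of Jaccard distances computed from a binary band-presence matrix. The sketch below illustrates this with an invented matrix and hypothetical isolate labels, not the published profiles.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Rows are isolates, columns are presence/absence (1/0) of scored ARDRA
# bands pooled over the three enzymes; all values here are invented.
bands = np.array([[1, 1, 0, 1, 0, 1],
                  [1, 1, 0, 1, 0, 1],   # identical profile -> joins at distance 0
                  [1, 0, 1, 0, 1, 0],
                  [0, 0, 1, 1, 1, 0]])

dist = pdist(bands, metric="jaccard")    # 1 - Jaccard similarity coefficient
tree = linkage(dist, method="average")   # average linkage = UPGMA

dendrogram(tree, labels=["iso1", "iso2", "iso3", "iso4"])  # hypothetical codes
plt.show()
```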
Soil physico-chemical analysis was performed to assess the soil nutrient status and health. The analysis of macro and micro-nutrient contents along with some other important parameters (soil type, pH and electrical conductivity) of Shisham rhizospheric soil from three different provenances are presented in Table . The soil texture was silty loam in Lachhiwala and Tanakpur region whereas it was silty clay loam in Pantnagar. Soil pH in Pantnagar soil was 6.85 which was comparatively higher than Lachhiwala and Tanakpur (6.00 and 6.12) respectively. Electrical conductivity was found to be 0.11 dS m −1 for Lachhiwala, 0.14 dS m −1 for Tanakpur and 0.13 dS m −1 for Pantnagar. Total organic carbon in Pantnagar, Lachhiwala and Tanakpur was 42,750 kg hac −1 , 19,500 kg hac −1 and 25,000 kg hac −1 respectively. Further available phosphorus in soil was highest in Lacchiwala (56.48 kg hac −1 ) as compared to Pantnagar (37.86 kg hac −1 ) and Tanakpur (46.87 kg hac −1 ). Total nitrogen (TN) in Pantnagar, Lachhiwala and Tanakpur was 137.98 kg hac −1 , 163.07 kg hac −1 and 100.35 kg hac −1 respectively while soil potassium was 505.34 kg hac −1 , 434.11 kg hac −1 , and 520.12 kg hac −1 respectively. The iron (22.6 kg hac −1 ) and zinc (11 kg hac −1 ) content was highest in Tanakpur soil as compared to the Lachhiwala (Fe: 12.5 kg hac −1 ; Zn: 9.3 kg hac −1 ) and Pantnagar soil (Fe: 11 kg hac −1 ; Zn: 0.2 kg hac −1 ) (Table ). Soil nutrient properties were analysed statistically. The ANOVA (p < 0.05) results revealed highly significant differences between soil nutrient values at Lachhiwala, Tanakpur and Pantnagar.
Alkaline phosphatase, acid phosphatase, fluorescein diacetate, dehydrogenase and urease activities of Shisham rhizospheric soils from Shisham forests at three different location was done. Alkaline phosphatase activity ranged from 442.8 µg PNP g −1 h −1 at Tanakpur to 1196.2 µg PNP g −1 h −1 at Lachhiwala. The highest activity of acid phosphatase enzyme was found to be 1109.6 µg PNP g −1 h −1 in Lachhiwala followed by Tanakpur (654.5 µg PNP g −1 h −1 ) and Pantnagar (574.8 µg PNP g −1 h −1 ). FDA (fluorescein diacetate) activity in Lachhiwala, Tanakpur and Pantnagar was 291.2 µg fluorescein g −1 h −1 , 372.6 µg fluorescein g −1 h −1 and 325 µg fluorescein g −1 h −1 respectively. Dehydrogenase enzyme levels were two-fold higher in case of Tanakpur forest (4300 µg TPF g −1 h −1 ) as compared to Lachhiwala forest (1880 µg TPF g −1 h −1 ) while least activity was reported in the case of Pantnagar forest (1770 µg TPF g −1 h −1 ). The maximum urease activity was observed in the Shisham rhizosphere soil from Lachiwala forest (241 µg NH 4 + g −1 h −1 ) followed by Pantnagar forest (192.25 µg NH 4 + g −1 h −1 ). The minimum urease activity was observed in rhizosphere soil from Tanakpur forest (65.78 µg NH 4 + g −1 h −1 ). There was a significant difference (p < 0.05) between the enzyme activities of Shisham rhizosphere soils from three provenances (Table ).
Total population as enumerated on Angle’s medium in Shisham rhizospheric soil of Tanakpur, Lacchiwala and Pantnagar was 2.76 × 10 4 , 1.87 × 10 4 and 1.96 × 10 4 cfu g −1 of soil. However, count of phosphorus solubilizing bacteria was 1.20 × 10 4 cfu g −1 , 1.55 × 10 4 cfu g −1 and 1.06 × 10 4 cfu g −1 soil at Tanakpur, Lachhiwala and Pantnagar respectively. Name of the selected PSB bacterial isolates were coded according to their native rhizospheric region from different forest (Table ). Bacterial morphological characteristics were also observed (Table ).
Overall, 18 PSBs, eight from Lacchiwala, four from Pantnagar and six from Tanakpur were recovered on Pikovaskya agar plates from Shisham rhizospheric soil of different provenances (Fig. ). All eighteen bacterial isolates exhibited zone of solubilization in the range 1.16 to 4.75 cm on pikovaskya agar plates (Fig. ). The isolates from Lachhiwala provenance depicted higher phosphate solubilising index as compared to Tanakpur and Pantnagar. Highest P solubilising index (PSI) was detected in L4 and lowest in T4 (Table ). Bacteria-mediated phosphorous solubilization was quantified by following Fiske and Subbarow (1925) method. Out of the eighteen bacterial isolates, L4 solubilized highest amount of phosphorus (891.38 µg mL −1 ) and T4 (285.78 µg mL −1 ) solubilized lowest amount of phosphorus (Fig. ). The solubilizing index of PSBs as detected on Pikovaskya agar plates positively correlated with amount of P solubilized in NBRIP liquid medium.
Selected PSB strains were screened for various enzyme activities and plant growth promotory properties. All PSBs exhibited one or more of enzymes amylase, urease, nitrate reductase, lipase, xylanase, protease, pectinase and catalase activity (Fig. ). Among the eighteen isolates, four isolates such as L7, L8, T3 and T5 were positive for amylase production. Urease test was found positive for L4, P2, T2 and T6. All the isolates except L4, T1, T3, T4, T5 and T6 exhibited nitrate reduction. Out of eighteen PSBs, five; L7, L8, P2, T3 and T5 were positive for lipase activity. Only eight isolates, L7, L8, P1, P4, T1, T3, T4 and T5 were positive for xylanase production. Five isolates from Lachhiwala (L1, L2, L5, L7 and L8), one each from Pantnagar (P2) and Tanakpur (T5) were positive for protease production as a halo zone was observed around bacterial growth on skim milk agar plates. Six out of eighteen isolates L1, L5, P1, P3, P4 and T1 were positive for pectinase enzyme production. Except L6, T1 and T3 all the isolates were able to produce catalase as production of gas bubbles and effervescence was observed after addition of drops of H 2 O 2 . Amongst 18 PSBs, seven isolates were able to solubilize Zinc. Zinc solubilization efficiency was highest in L3, L5, P2 and T2 and lowest in L4, P3 and P4. Five isolates were positive for siderophore production. Orange halos were maximum in L7, L8, T1, T3 and minimum in L1. IAA production was maximum in L4, P3, T1, T2 and T4 and least in L1, L5, L6, L7, L8, P1, P4, T3 and T5. All the isolates except P2 were negative for HCN production. Ammonia production in peptone water marked by color change from yellow to orange was found positive for all isolates except L6, L7, T1 and T3. Hence, all bacterial isolates exhibited multiple PGP traits along with inorganic P solubilization (Fig. ; Table ).
PCR amplification of the 16S rDNA gene region of all eighteen PSB isolates recovered from the Shisham rhizosphere of the different provenances yielded a distinct band of 1492 bp on the agarose gel (Fig. ). The bacterial isolates were identified by comparing their 16S rDNA sequences with reference strains using the BLASTn programme. Of the eighteen isolates, seven were identified within the genus Pseudomonas: three from Lachhiwala (L1, L3 and L5) and four from Pantnagar (P1, P2, P3 and P4). Four isolates were identified as Streptomyces sp. (L6, L7, T3 and T5), two each as Klebsiella sp. (L4 and T2) and Staphylococcus sp. (L2 and T6), and one each as Pantoea sp. (L8), Kitasatospora sp. (T1) and Micrococcus sp. (T4). The eighteen strains thus belonged to 7 genera distributed across three phyla, Proteobacteria, Actinobacteria and Firmicutes: Pseudomonas, Klebsiella, Streptomyces, Pantoea, Kitasatospora, Micrococcus and Staphylococcus (Fig. ; Table ). The Pseudomonas strains were identified as L1 (98.14% similarity to Pseudomonas simiae strain NR 042392.1), L3 and L5 (99.16% similarity to Pseudomonas paralactis strain KP756923), P1 (98.89% similarity to Pseudomonas hunanensis strain JX545210), P2 (97% similarity to Pseudomonas aeruginosa strain NR 117678.1), P3 (98.14% similarity to Pseudomonas putida strain Z76667.1) and P4 (98.42% similarity to Pseudomonas plecoglossicida strain NR 114226.1). Strain L8 was identified as Pantoea sp. (96.83% similarity to Pantoea conspicua strain NR 116247.1). The two Klebsiella strains were L4 (99.51% similarity to Klebsiella variicola strain CP010523) and T2 (96.37% similarity to Klebsiella singaporensis strain AF250285). Strain L2 was assigned to Staphylococcus petrasii (97.98% similarity; NR 118450.1) and T6 to Staphylococcus pasteuri (NR 114435.1). The isolates belonging to the phylum Actinobacteria clustered together, comprising T4 (98.0% similarity to Micrococcus yunnanensis strain NR 116578.1), T1 (93.86% similarity to Kitasatospora kifunensis strain NR 112085.2), L6 (87% similarity to Streptomyces curacoi strain KY585954.1), L7 (95% similarity to Streptomyces cellostaticus strain NR 112304.1), T3 (94.22% similarity to Streptomyces antibioticus strain NR 043348.1) and T5 (97.92% similarity to Streptomyces griseoruber strain NR 041086.1). The 16S rDNA sequences of all eighteen isolates were deposited in NCBI GenBank under accession numbers MG966339-MG966355 (Table ).
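For illustration, the identification step can be reproduced with Biopython's BLAST interface, as sketched below; the query is a generic 16S placeholder rather than one of the deposited MG966339-MG966355 sequences, and the call requires network access to NCBI:

```python
# Submit a 16S rDNA fragment to NCBI BLASTn and report the top hits.
from Bio.Blast import NCBIWWW, NCBIXML

seq_16s = "AGAGTTTGATCCTGGCTCAGATTGAACGCTGGCGGCAGGCCTAACACATGCAAGTCGAGCGG"
handle = NCBIWWW.qblast("blastn", "nt", seq_16s, hitlist_size=5)
record = NCBIXML.read(handle)

for alignment in record.alignments:
    hsp = alignment.hsps[0]
    identity = 100 * hsp.identities / hsp.align_length
    print(f"{identity:6.2f}% identity  {alignment.title[:70]}")
```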
Based on amplified ribosomal DNA restriction analysis (ARDRA) profiles and morphological characters, the isolates were selected and taxonomically identified. Restriction of the amplified 16S rDNA with endonucleases generated DNA fragments of 100-1000 bp. The restriction enzyme AluI generated 2-4 well-resolved bands of 100-700 bp in all eighteen isolates and resolved the 18 strains into eight different genotypes (Fig. a). Restriction of the amplified 16S rDNA region with BsuI resulted in 2-4 well-resolved bands ranging from 100 to 1000 bp and resolved the 18 strains into six different genotypes (Fig. b). The restriction profiles obtained with MspI consisted of one to three well-resolved bands in the 200-600 bp region and distinguished the eighteen isolates into eight genotypes (Fig. c).
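The fragment patterns can also be anticipated in silico. The sketch below cuts a 1492 bp placeholder amplicon at AluI (AGCT) and MspI (CCGG) recognition sites; the sequence and the midpoint cut position are simplifications for illustration only:

```python
# Toy ARDRA digest: fragment sizes from recognition-site positions.
def digest(seq: str, site: str) -> list[int]:
    """Return fragment lengths after cutting seq at each occurrence of site."""
    cuts, start = [], 0
    while (pos := seq.find(site, start)) != -1:
        cuts.append(pos + len(site) // 2)  # approximate cut at the site midpoint
        start = pos + 1
    bounds = [0, *cuts, len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

# 1492 bp placeholder with two AluI sites and one MspI site.
amplicon = "A" * 300 + "AGCT" + "T" * 400 + "CCGG" + "G" * 300 + "AGCT" + "C" * 480
for enzyme, site in [("AluI", "AGCT"), ("MspI", "CCGG")]:
    print(enzyme, digest(amplicon, site))  # AluI: [302, 708, 482]; MspI: [706, 786]
```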
An unweighted pair group method with arithmetic mean (UPGMA) dendrogram based on Jaccard's coefficient was constructed from the ARDRA profiles of the 16S rDNA region with AluI, BsuI and MspI using the NTSYSpc version 2.0 software. Restriction profiles were interpreted on the basis of the bands developed, and similar banding patterns obtained after combining the three independent digestions were grouped. The isolates showed higher polymorphism with AluI and MspI than with BsuI: eight different restriction patterns were obtained with AluI and MspI, whereas six were obtained with BsuI. Phylogenetic relationships within the Gram-negative and Gram-positive isolates were revealed by clustering them separately. In a UPGMA cluster based on the RFLP with AluI, BsuI and MspI, all Gram-negative strains grouped into two major clusters, A and B (Fig. a). Cluster A included five isolates (L1, L3, L5, P1 and P3) and was further divided into two subclusters: subcluster I included L1, L3 and L5, and subcluster II grouped P1 and P3. L3 and L5 in subcluster I exhibited 100% similarity and were related to L1 at a distance of 0.80 on Jaccard's scale. Cluster B included the remaining strains, P4 and P2, related at a distance of 0.60 on Jaccard's scale. A separate dendrogram was constructed for the Gram-positive bacteria (Fig. b). The majority of the Gram-positive isolates were placed in a single cluster, which was further divided into two subclusters at 0.80 on Jaccard's scale; subcluster I included L6 and L7, whereas subcluster II included T3 and T5. Isolate T4 was placed singly on an outlying branch at a distance of 0.60 on Jaccard's scale, and isolate T1 was distantly related (0.35 on Jaccard's scale) to all the other strains.
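The study used NTSYSpc; the same UPGMA/Jaccard analysis can be sketched in Python, where average linkage on Jaccard distances is exactly UPGMA. The binary band matrix below is hypothetical:

```python
# UPGMA dendrogram from a pooled AluI/BsuI/MspI presence-absence band matrix.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

isolates = ["L1", "L3", "L5", "P1", "P2", "P3", "P4"]
bands = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 1, 1],
    [1, 0, 1, 0, 1, 1],
], dtype=bool)

distances = pdist(bands, metric="jaccard")   # 1 - Jaccard similarity
tree = linkage(distances, method="average")  # 'average' linkage == UPGMA
dendrogram(tree, labels=isolates, no_plot=True)  # set no_plot=False to draw
print(tree)
```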
pqqA and pqqC genes

To confirm the presence of the conserved genomic regions (pqqA and pqqC) involved in gluconic acid formation, PQQ gene amplification was performed with designed primers. Sixteen of the eighteen bacterial isolates showed positive amplification of the pqqC gene (82 bp band), whereas six isolates, namely L1, L3, L5, P1, P3 and P4, showed positive amplification of the pqqA gene (72 bp band) (Figs. , ). The positive amplification of both pqqC and pqqA in these six isolates suggests that they possess two crucial genes of the PQQ biosynthesis pathway.
Among microorganisms, bacteria play an important role in biogeochemical cycling. Bacteria solubilize the insoluble organic and inorganic phosphates in soil, which makes P available to plant roots and is considered the most eco-friendly and economical method. PSB are well known for disease suppression, achieved by synthesizing pathogen-inhibitory compounds as well as by enhancing the plant immune response. Hence, the aim of this study was to identify PSB that both suppress plant disease and enhance plant growth, bringing dual benefits. The soil of Pantnagar was found to be a silty clay loam with a high pH, high carbon, low phosphorus, and low micronutrient (Fe and Zn) content in comparison with the other two samples. At the time of sampling, the rate of mortality of Shisham trees was highest in the Pantnagar soil, and the deficiency of micro- and macronutrients in this soil may underlie that mortality. Micronutrients are essential for the proper functioning of plants as well as for promoting the growth of beneficial microbes in the rhizospheric region. An inadequate amount of micronutrients in soil directly affects the metabolic capacity of plants, which in turn affects their tolerance of biotic and abiotic stress. Macro- and micronutrient deficiency in soil reduces the yield of crops and plants, invites disease and favors its propagation. Hence, the low nutrient status (low P, Fe and Zn) of the Pantnagar soil might be associated with disease incidence and spread. Correlation analysis showed that the P solubilization index and the amount of soluble phosphorus in liquid NBRIP medium shared a highly significant relationship (t value = 15.30069), indicating that the strains with the highest potential to solubilize Ca₃(PO₄)₂ in liquid media were the same ones that exhibited the largest halos. Moreover, the slightly high pH of the soil could also contribute to mortality in the Pantnagar Shisham forest: a higher soil pH hinders the availability of phosphorus to plants and alters the biological, geological, and chemical environment of the soil, which leads to disease in plants. Soil enzyme activities and nutrient status are closely related; soil organic carbon (SOC), phosphorus, nitrogen, potassium and other essential micronutrients significantly affect the activities of soil enzymes. In the present study, fluorescein diacetate (FDA) and dehydrogenase activities correlated with the culturable microbial population, or respiratory metabolism. The dehydrogenase and FDA activities were higher in the Shisham rhizosphere from Tanakpur, where the aerobic bacterial population was also highest. Soil phosphatase activity is pH sensitive and depends on the number and diversity of the soil-resident microflora; the acid phosphatase, alkaline phosphatase and urease activities were higher in the Shisham rhizosphere from Lachhiwala, reflecting its larger total microbial population. Pathogens are encouraged to colonize the rhizosphere by increasing carbon levels, whereas beneficial bacteria may do so when more nutrients are available; this indicates that it is not the individual C and nutrient contents but their ratio that shapes the rhizosphere microbiome, which ultimately alters the soil enzyme status. Organic phosphate is solubilized by a group of phosphatase enzymes, including acid and alkaline phosphatases, phytases, and nucleotidases, among which the extracellular acid and alkaline phosphatases play a key role in solubilization.
In terrestrial ecosystems, acid phosphatase is primarily synthesized by plant roots and microbial action, whereas alkaline phosphatase is synthesized by microbes. Li et al. studied the roles of acid and alkaline phosphatase in a subalpine forest region and found that alkaline phosphatase, rather than acid phosphatase, actively participates in the mineralization and solubilization of phosphorus, making it available to plant roots. The present study indicates that the different abiotic environmental factors and the total organic carbon content of the soil at the different provenances could significantly affect the PSB population in the rhizospheric region. Microbial population density in the rhizosphere depends on several factors, such as the physico-chemical properties of the soil, soil water potential, changes in soil pH, the partial pressure of oxygen and the chemical composition of plant exudates. Microbial enzymes such as amylase, xylanase, lipase, pectinase, and protease are actively involved in organic matter decomposition and plant growth promotion, and are important in disease suppression. Bacterial genera such as Pseudomonas, Micrococcus, Paenibacillus, Streptococcus, Curtobacterium and Chryseobacterium are reported to produce hydrolytic enzymes that degrade the cell walls of pathogenic organisms. Of the eighteen isolates recovered in the present study, seven were positive for Zn solubilization; the production of organic acids is the prominent mechanism of Zn solubilization by rhizobacteria. Five of the 18 isolates exhibited yellow-to-orange halo zones on CAS-amended nutrient agar plates, indicating siderophore production. Siderophores may enhance plant growth by mobilizing metal cations, including Fe and Cu, as well as indirectly stimulating P solubilization and disease suppression. Siderophore-positive PGPRs scavenge Fe³⁺ from complex compounds under iron-starvation conditions and thus indirectly release P into the soil; moreover, they deprive phytopathogens of iron and hence contribute to disease suppression. In the present study, fourteen isolates were potent IAA producers. IAA production by bacteria enhances root growth, which leads to increased nutrient uptake by plants; the ability of microbes to produce IAA varies among species and is also affected by substrate availability, culture conditions and the stage of growth. HCN is also reported to play a crucial role in disease suppression, and ammonia promotes plant growth by providing N to plants and suppressing plant pathogens. The isolated bacterial strains were related to the genera Streptomyces, Pseudomonas, Klebsiella, Staphylococcus, Kitasatospora, Pantoea, and Micrococcus. Several members of these genera are recognized for plant growth-promoting, P-solubilizing and biocontrol properties, for example Pantoea, Pseudomonas and Streptomyces, Klebsiella and Micrococcus, and Kitasatospora, the last reported for resistance to pest attack and growth promotion in teak (Tectona grandis), a valuable tree species. Sixteen bacterial isolates showed positive amplification of the 82 bp pqqC gene, whereas six showed amplification of the 72 bp pqqA gene; the isolates that exhibited an amplicon for pqqA were also positive for pqqC, suggesting that they possess two crucial genes of the PQQ biosynthesis pathway. The PQQ operon (pqqA-pqqF) is organized differently in different PSB isolates; for example, in the PQQ operon of Acinetobacter calcoaceticus, the pqqF gene is absent, while in P.
fluorescens B16, the PQQ operon comprises 11 genes, namely pqqA, B, C, D, E, F, H, I, J, K, and pqqM. The presence of the pqqA and pqqC genes therefore makes these bacterial isolates prominent candidates for the solubilization of insoluble phosphate; the presence of the pqqA, pqqC, pqqD and pqqE genes is a prerequisite for P solubilization in PSB isolates. The pqqA gene encodes a 22-amino-acid peptide whose glutamic acid and tyrosine residues provide the carbon and nitrogen for PQQ biosynthesis. The pqqC gene encodes pyrroloquinoline quinone synthase C (PqqC), which catalyzes the conversion of 3a-(2-amino-2-carboxy-ethyl)-4,5-dioxo-4,5,6,7,8,9-hexahydroquinoline-7,9-dicarboxylic acid to pyrroloquinoline quinone. We can therefore conclude that the selected bacterial isolates might follow a gluconic acid-mediated mechanism for the solubilization of insoluble P in soil. pqqC is ubiquitous in Pseudomonas species. High-PQQ-producing bacteria have been identified in diverse genera, including Mycobacterium, Acinetobacter, Hyphomicrobium, Gluconobacter, Klebsiella, Polyporus, Ancylobacter, Pseudomonas, Xanthobacter, Methylobacillus, Paracoccus, Methylophilus, Methylobacterium, Thiobacillus and Methylovorus. In the present study, several strains showed no amplification of the pqqA and pqqC genes yet solubilized phosphorus on Pikovskaya medium. A possible reason is that these strains solubilize phosphate via the secretion of organic acids other than gluconic acid, such as isovaleric, lactic, isobutyric, glycolic, acetic, oxalic, succinic and malonic acids. Bacteria such as E. coli JM109 (genetically modified), Synechococcus PCC7942 (phosphoenolpyruvate carboxylase, ppc), Serratia marcescens and Pseudomonas cepacia (gabY) solubilize P through pathways or genes other than PQQ. Therefore, our findings indicate that nutrient deficiency, excess available carbon and high pH favor pathogenic microorganisms and are the main causes of wilt in the Pantnagar soil. Most of the selected bacterial strains have previously been reported for P solubilization; the mechanism of P solubilization through the signature genes pqqA and pqqC is reported here for the first time for the Shisham forest region.
In this study, we found that the nature of the soil and its native microbial community play a crucial role in plant growth and protection. To resolve the problem of tree mortality in forest soils, it is necessary to analyze the physicochemical and biological properties of the soil: deficiencies of macro- and micronutrients and alterations in soil pH and soil enzymes may invite various plant diseases and pathogens. Identifying and enriching the best PGPR strains could minimize the mortality of Shisham trees and help enhance biodiversity. Amplification of the phosphate-solubilization genes (pqqA and pqqC) in the bacterial strains provides strong evidence for their mechanism of phosphate solubilization and their potent solubilizing efficiency. Hence, our findings suggest that a bioformulation of these bacterial isolates could mitigate phosphate deficiency and promote plant yield both directly and indirectly.
Supplementary Information.
Computational Design and In Vitro and In Vivo Characterization of an ApoE-Based Synthetic High-Density Lipoprotein for Sepsis Therapy | 304dd9a6-2026-4554-82a8-489b9f0e47ce | 11940477 | Pathologic Processes[mh] | Sepsis affects nearly 19 million people globally each year, with a mortality rate exceeding 30% due to the lack of effective therapies [ , , ]. Despite over 100 clinical trials aimed at targeting various components of the inflammatory and coagulation pathways, patient survival rates have seen little improvement . Sepsis results from a cascade of dysregulated host responses involving multiple steps: Upon infection, bacteria release endotoxins. Endotoxins activate immune effector cells, leading to the production of inflammatory cytokines and chemokines. These inflammatory mediators activate endothelial cells (ECs), causing endothelial dysfunction characterized by vascular leakage, increased leukocyte adhesion, an altered vascular tone, and a shift towards a procoagulant state. Additionally, sepsis induces hemolysis, where broken red blood cells release highly toxic heme, causing cell damage. The elevated levels of inflammatory cytokines and chemokines further contribute to cell damage, releasing damage-associated molecular pattern molecules (DAMPs) that cause a secondary dysregulated immune response . All of these responses culminate in irreversible multi-organ failure and septic death [ , , , ]. A significant challenge is that multiple factors and steps contribute to sepsis, and targeting a single regulatory factor or step has shown limited effectiveness. Therefore, targeting an endogenous factor with multi-protective effects against sepsis might be a promising therapeutic approach . High-density lipoproteins (HDLs), a major component of circulating blood , are well recognized as protective factors against cardiovascular and other chronic inflammatory diseases . HDLs play a crucial role in detoxifying endotoxins. Upon infection, Gram-negative bacteria release lipopolysaccharides (LPSs), which bind to TLR4 on immune effector cells. This binding initiates a downstream signaling cascade, activating proinflammatory genes and leading to the production of high levels of cytokines such as TNF-α and IL-6, which results in cell damage . The proinflammatory cytokines can stimulate immune effector cells to generate more cytokines. A body of evidence indicates that HDLs detoxify LPSs through two mechanisms: (i) HDLs neutralize LPSs [ , , , , , , , , ], most LPSs in circulation exist in an HDL-bound form , and HDL-LPS binding attenuates LPS-TLR4 interactions [ , , , ]. (ii) Recent studies, including ours, suggest that HDLs act together with their receptor, the scavenger receptor BI (SR-BI), to promote LPS clearance [ , , ]. In vitro, HDLs promote SR-BI-mediated LPS uptake by 4-fold in SR-BI-transfected HEK cells and by 2-fold in primary hepatocytes . In vivo, mice deficient in SR-BI or HDLs display impaired LPS clearance in LPS or sepsis animal models [ , , ]. These findings suggest that HDLs neutralize LPSs and promote LPS clearance via SR-BI-mediated LPS uptake, which presents a more efficient mechanism for LPS detoxification than simple neutralization by anti-LPS antibodies. Additionally, lipoteichoic acid (LTA), released by Gram-positive bacteria, activates the TLR2/6 pathway to generate high levels of inflammatory cytokines, causing cell damage. Similar to LPSs, LTA is associated with HDLs in circulation and the binding of HDL-LTA neutralizes LTA . 
Given the structural similarity between LPSs and LTA, it is likely that HDLs neutralize LTA and promote LTA clearance via SR-BI-mediated LTA uptake. In addition, HDLs have other activities, such as the suppression of inflammatory signaling in immune effector cells and the inhibition of EC activation [ , , , , , , , , , , , , , , , , , , ]. Thus, HDLs may present a promising target for sepsis therapy. Numerous clinical studies have shown that the levels of HDLs drop markedly in septic patients, and this is associated with a poor prognosis [ , , ]. We used ApoA-I-null mice as an HDL-deficient model and tested the role of HDLs using cecal ligation and puncture (CLP) as a model of sepsis. We found that a deficiency in HDLs led to a susceptibility to CLP-induced death, as well as less LPS neutralization and LPS clearance. We further found that increasing the HDL levels by overexpressing ApoA-I improved the survival of CLP-induced mice. These clinical and experimental findings strongly suggest that a decrease in HDL levels is a risk factor for sepsis and that raising the circulating HDL levels may provide an efficient therapy for sepsis. A number of earlier studies showed that sHDL treatment improves the survival of LPS-challenged animals [ , , , ]. Using a Gram-negative bacterial infection model, Quezado et al. showed that the administration of sHDLs suppressed inflammatory cytokine production, but the sHDLs failed to improve survival in that study due to the toxicity and impurity of the sHDL product. An earlier study using the ApoA-I mimetic peptide 4F showed that the 4F treatment increased the HDL cholesterol levels and improved the survival of CLP-treated rats; unfortunately, survival was only monitored for two days in that study. We recently utilized an improved sHDL, known as ETC642 (the most potent of the known sHDL particles), and tested its efficacy in CLP-challenged mice. ETC642 is a 22-amino acid ApoA-I mimetic peptide bound to phospholipids to form sHDL nanoparticles. A single dose of ETC642 increased the HDL cholesterol level for up to 48 h in a dose-dependent manner in humans. We administered ETC642 to C57BL/6J (B6) mice and found that the ETC642 treatment significantly improved the 7-day survival rate of CLP-treated mice. These studies revealed that the administration of an sHDL could be a potentially effective therapeutic approach, but a more potent sHDL nanoparticle is highly desired as a practically useful therapeutic agent for the treatment of sepsis.
Using the computational approach, we designed four new types of sHDLs, two based on the ApoA-I sequence (YGZL1 and YGZL2) and two based on the ApoE sequence (YGZL3 and YGZL4). Using two mouse models of sepsis, we demonstrate for the first time that an ApoE-based novel type of sHDL nanoparticle, YGZL3, provides effective protection against sepsis.
2.1. Computational Details

2.1.1. Coarse-Grained (CG) Model of YGZL Peptides

In constructing the CG model of YGZL peptides, the process began by mimicking the repeated α-helical fragment of ApoA-I. The protein builder function of PyMOL was used to construct a standard α-helical secondary structure with the designed amino acid sequence as an initial atomistic model. This atomistic model was then transformed into a CG model, utilizing the MARTINI force field designed specifically for the CG system.
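A rough sketch of this two-step construction is shown below, assuming the PyMOL Python API for helix building and the martinize2 tool (from the vermouth package) as a stand-in for the original MARTINI conversion script; the sequence is a made-up amphipathic 22-mer, not a YGZL peptide, and the martinize2 flags should be checked against the installed version:

```python
# Build an ideal alpha-helix for a designed sequence, then convert it to MARTINI CG.
import subprocess
import pymol
pymol.finish_launching(["pymol", "-qc"])  # quiet, GUI-less session
from pymol import cmd

sequence = "EVLKDFFKDLLEKFKEAFKKLA"  # hypothetical 22-mer, for illustration only
cmd.fab(sequence, "helix", ss=1)     # ss=1 requests helical backbone geometry
cmd.save("helix.pdb", "helix")

# Atomistic-to-CG conversion; flags assumed from the martinize2 documentation.
subprocess.run(["martinize2", "-f", "helix.pdb", "-x", "helix_cg.pdb",
                "-o", "topol.top", "-ff", "martini22"], check=True)
```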
2.1.2. CG Model of Solvent Molecules, Ions, Sphingomyelin, and 1,2-Dipalmitoyllecithin

The CG model for solvent molecules represented four water molecules as a single CG bead. In the case of ions, they were modeled as charged CG beads, with their first hydration shell being implicitly included. For the lipids, specifically sphingomyelin (PPCS) and 1,2-dipalmitoyllecithin (DPPC), we adopted parameters from the MARTINI force field for lipids.
2.1.3. CG Model for the LTA and LPSs

Lipopolysaccharides (LPSs) and lipoteichoic acid (LTA) are complex glycophospholipids with distinct structures that play important roles in sepsis. LPSs consist of three main components: a polysaccharide O-antigen, a core oligosaccharide, and a glycolipid moiety known as lipid A. The lipid A component is critical, as it mediates the proinflammatory and cytotoxic effects of LPSs, effectively making it the core moiety of LPSs. The structure of LTA, on the other hand, varies among Gram-positive bacterial species; it typically includes long chains of ribitol or glycerol phosphate, but a glycolipid moiety for membrane anchoring is a common feature and serves as the core moiety of LTA. We therefore focused solely on the glycolipid moieties of LPSs and LTA, disregarding their more variable components, which allowed for a more streamlined analysis. Parameterization for LTA was based on the parameters of galactosyldiacylglycerol (DGDG) from the MARTINI force field for glycolipids. We added an additional Qa bead to represent the extra phosphate group in LTA, with the bond and angle parameters sourced from the phosphatidylinositol (PI) parameters in the same force field. For LPSs, the parameterization process involved iteratively fitting the CG model to the all-atomistic model. This approach aligned with the protocol used in Lopez's research, ensuring a rigorous and accurate modeling process.
2.1.4. Self-Assembly of sHDL Nanoparticles

In the assembly of sHDL nanoparticles, we started by constructing the initial model of the simulation system for YGZL peptides. This involved a random distribution of 20 YGZL peptides, 75 PPCS molecules, and 75 DPPC molecules, following a 1:3.75:3.75 molar ratio, which aligned with the composition of the reported HDL-like ETC642 particle. Each system contained 20 α-helical peptides, equivalent to the number of α-helices in two ApoA-I lipoproteins according to the double-belt model, along with 150 lipids. This composition approximated the reported model of discoidal HDL particles [ , , , ]. For a control, a lipid-only particle simulation system was also constructed. This system comprised a random distribution of 75 PPCS molecules and 75 DPPC molecules, without the addition of peptides. Subsequently, both systems were supplemented with CG beads representing water and ions (0.15 M NaCl), creating a physiological environment. The simulations were performed for a total duration of 3 microseconds with a 30 fs integration time step at a temperature of 323 K with an NPT ensemble, using the GROMACS 2016.1 software. This elevated temperature in the CG simulation was chosen to replicate the results of all-atom simulations typically conducted at 300-315 K [ , , ].
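For concreteness, a minimal driver for a run with these parameters (3 µs at a 30 fs step, 323 K, NPT) might look as follows; the file names and the reduced .mdp option set are illustrative assumptions, not the study's production settings:

```python
# Write a CG .mdp file and launch the self-assembly run with GROMACS.
import subprocess

DT_PS = 0.030                    # 30 fs integration time step, in ps
NSTEPS = int(3_000_000 / DT_PS)  # 3 us total = 1e8 steps

mdp = f"""
integrator      = md
dt              = {DT_PS}
nsteps          = {NSTEPS}
tcoupl          = v-rescale
tc-grps         = System
tau_t           = 1.0
ref_t           = 323
pcoupl          = parrinello-rahman
tau_p           = 12.0
ref_p           = 1.0
compressibility = 3e-4
"""
with open("assembly.mdp", "w") as fh:
    fh.write(mdp)

subprocess.run(["gmx", "grompp", "-f", "assembly.mdp", "-c", "mixed.gro",
                "-p", "system.top", "-o", "assembly.tpr"], check=True)
subprocess.run(["gmx", "mdrun", "-deffnm", "assembly"], check=True)
```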
2.1.5. Modeling the Binding Model of sHDL Nanoparticles with LPSs/LTA

To simulate the binding conformation of LTA/LPSs with sHDL particles, our approach involved randomly adding a single molecule of LTA or LPSs into the solvent region of the simulation system, either with the assembled sHDL particle or the lipid-only particle (serving as a control). In this process, any solvent beads overlapping with the LTA/LPS molecules were removed. Additionally, the introduction of the LTA/LPS charge was counterbalanced by adding Na+ ions. These systems were then subjected to simulations for a total duration of 1 microsecond each, with a 30 fs integration time step at 323 K, conducted using the GROMACS 2016.1 software.
2.1.6. Calculation of Binding Free Energies of sHDL Nanoparticles with LPSs/LTA

The binding free energy between LTA/LPSs and sHDL particles was estimated through potential of mean force (PMF) calculations. This process was executed by utilizing the pull code and the weighted histogram analysis method (WHAM), as implemented in the GROMACS software. The procedure started with the extraction of reaction coordinates from the previously equilibrated LPS/LTA binding simulation systems. This involved pulling the center of mass (COM) of the LPS/LTA molecule 8 nm away from the COM of the sHDL particle across an 80 ns simulation with a 30 fs time step. Following this, 40 sampling windows were set for an additional 10 ns of equilibration with a 30 fs time step and another 10 ns of umbrella sampling at a finer 5 fs time step. These steps were based on the reaction coordinates, spaced at 0.2 nm intervals. To ensure the reliability of the results, the binding free energy of the LPSs/LTA with the lipid-only particle was also estimated, following the same methodological approach for control purposes. Furthermore, to average out any potential fluctuations and enhance the accuracy of the findings, three independent PMF calculations were conducted for each system.
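The window bookkeeping implied above can be sketched as follows: 40 umbrella windows at 0.2 nm spacing along the COM separation, each run with a pull-code .mdp, then collated for `gmx wham`. The group names, force constant, and file layout are assumptions for illustration:

```python
# Generate umbrella-window .mdp fragments and run WHAM over the results.
import subprocess

windows = [round(0.2 * i, 1) for i in range(1, 41)]  # 0.2 ... 8.0 nm

pull_mdp = """
pull                 = yes
pull-ncoords         = 1
pull-ngroups         = 2
pull-group1-name     = sHDL
pull-group2-name     = LPS
pull-coord1-groups   = 1 2
pull-coord1-type     = umbrella
pull-coord1-geometry = distance
pull-coord1-init     = {dist}
pull-coord1-k        = 1000
"""

for i, dist in enumerate(windows):
    with open(f"umbrella_{i:02d}.mdp", "w") as fh:
        fh.write(pull_mdp.format(dist=dist))
    # gmx grompp / gmx mdrun for each window would go here

with open("tpr.dat", "w") as fh:
    fh.write("\n".join(f"umbrella_{i:02d}.tpr" for i in range(len(windows))))
with open("pullf.dat", "w") as fh:
    fh.write("\n".join(f"umbrella_{i:02d}_pullf.xvg" for i in range(len(windows))))

subprocess.run(["gmx", "wham", "-it", "tpr.dat", "-if", "pullf.dat",
                "-o", "pmf.xvg", "-hist", "hist.xvg"], check=True)
```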
2.2. Reagents

The peptide was synthesized by Genscript and the purity was determined to be >95% by an HPLC analysis. Egg sphingomyelin (SM) and 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) were purchased from Sigma Aldrich (St. Louis, MO, USA). LPSs (E. coli K12) were purchased from InvivoGen (San Diego, CA, USA). All other reagents were obtained from commercial suppliers and were of analytical grade or higher.

sHDL Preparation

The sHDL nanoparticles were made by co-lyophilization followed by thermal cycling, as described previously. Briefly, the peptide and phospholipids were combined and dissolved in glacial acetic acid or chloroform at a peptide/SM/DPPC ratio of 1:1:1 by weight. The resulting solution underwent rapid freezing in liquid nitrogen, which was followed by freeze-drying overnight to remove the acid. The lyophilized powder was reconstituted in 1X phosphate-buffered saline (PBS) to the desired final peptide concentration and vortexed to completely dissolve it, forming a cloudy white suspension. The resulting solution was subjected to three heat–cool cycles, with each cycle consisting of 10 min of heating at 55 °C and 10 min of cooling at room temperature, at which point a clear solution was formed. The pH of the final sHDL solution was adjusted to 7.4 with NaOH and was then passed through a 0.2 µm sterile filter.
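Because the formulation is 1:1:1 by weight, the amounts to weigh out reduce to simple arithmetic; the helper below is purely illustrative and not part of the published protocol:

```python
# Masses of peptide, SM and DPPC to co-dissolve for a target reconstitution.
def shdl_masses(peptide_mg_per_ml: float, volume_ml: float) -> dict[str, float]:
    peptide_mg = peptide_mg_per_ml * volume_ml
    return {"peptide_mg": peptide_mg, "SM_mg": peptide_mg, "DPPC_mg": peptide_mg}

print(shdl_masses(peptide_mg_per_ml=10.0, volume_ml=5.0))
# {'peptide_mg': 50.0, 'SM_mg': 50.0, 'DPPC_mg': 50.0}
```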
2.3. Studies in Animals and In Vitro Analysis

2.3.1. In Vivo Efficacy Analysis

We used two sepsis models in this study: cecal ligation and puncture (CLP)-induced sepsis and bacterial infection-induced pneumonia. CLP (21G needle, 2/3 ligation) was performed on 10- to 12-week-old male C57BL/6J mice as described previously. Two hours after the CLP, the mice were treated with 100 µL of PBS or sHDLs at 7.5 mg peptide/kg body weight (i.v.). Survival was monitored for a 7-day period. Eighteen hours after the CLP, HDLs were isolated from the plasma by sequential ultracentrifugation, as previously described (1.5 mL of plasma from five mice was used to make one HDL preparation), and the total HDL cholesterol was measured with a Wako Diagnostics kit. For P. aeruginosa-induced pneumonia sepsis, the mice received 1 × 10⁷ cfu of P. aeruginosa intranasally in 50 µL of PBS. Two hours later, the mice were treated with or without sHDLs, and survival was monitored for a 7-day period. The animals were bred at the University of Kentucky's animal facility, fed a standard laboratory diet, and kept in a 10/14 h light/dark cycle. Mouse tail DNA was used for PCR genotyping. The animal care and experiments were approved by the Institutional Animal Care and Use Committee of the University of Kentucky (protocol title: "Synthetic high density lipoprotein (sHDL) as a therapy for sepsis"; protocol code: 2020-3513; date of approval: 19 April 2023).
2.3.2. Analysis of NF-κB Expression in HEK-Blue Cells

HEK-Blue cells expressing TLR4 or TLR2 and an NF-κB reporter were used to assess ligand-stimulated NF-κB activation. The cells were cultured to 70% confluency and then treated with LPSs (K12, 1 ng/mL) or LTA (40 ng/mL) in the presence/absence of sHDLs for 16 h. The culture medium (100 μL) was mixed with 100 μL of HEK-Blue Detection, and the activation of the NF-κB reporter was quantified by measuring the absorption at 650 nm.
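A brief sketch of how such reporter readings might be reduced to percent inhibition relative to the ligand-only control; the A650 values are hypothetical:

```python
# Normalize NF-kB reporter absorbances to percent inhibition.
import numpy as np

a650_ligand_only = 1.20    # LPS (or LTA) alone, no sHDL
a650_unstimulated = 0.08   # background, no ligand
shdl_ug_ml = np.array([15, 30, 60, 120])
a650_treated = np.array([1.05, 0.82, 0.55, 0.31])

window = a650_ligand_only - a650_unstimulated
inhibition = 100 * (a650_ligand_only - a650_treated) / window
for dose, inh in zip(shdl_ug_ml, inhibition):
    print(f"{dose:>4} ug/mL sHDL: {inh:5.1f}% inhibition of NF-kB activation")
```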
2.3.3. Analysis of Cytokine Production by RAW264.7 Cells

RAW 264.7 cells were cultured to 80% confluency and then treated with LPSs (K12, 2 ng/mL) in the presence of sHDLs (0, 15, 30, 60, or 120 µg peptide/mL) for 18 h. The concentrations of TNF-α secreted by the cells into the cell culture medium were measured by ELISA.
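ELISA readouts of this kind are typically quantified against a four-parameter logistic (4PL) standard curve; the sketch below uses hypothetical standards and optical densities:

```python
# Fit a 4PL standard curve and invert it to convert sample OD to TNF-alpha.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    return bottom + (top - bottom) / (1 + (ec50 / x) ** hill)

std_conc = np.array([15.6, 31.2, 62.5, 125, 250, 500, 1000])  # pg/mL standards
std_od = np.array([0.10, 0.18, 0.33, 0.58, 0.98, 1.55, 2.10])

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 2.5, 200, 1.0], maxfev=10000)
bottom, top, ec50, hill = params

def od_to_conc(od: float) -> float:
    return ec50 * ((top - bottom) / (od - bottom) - 1) ** (-1 / hill)

print(f"sample OD 0.75 -> {od_to_conc(0.75):.0f} pg/mL TNF-alpha")
```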
2.4. Statistical Analysis

The data are presented as the means ± SEM or the means ± SD, as indicated in the figure legends. The statistical significance in experiments comparing two groups was determined by a two-tailed Student's t-test. Comparisons of more than two groups were evaluated by a one-way ANOVA followed by Tukey's post hoc analysis. Means were considered statistically significantly different when p < 0.05. Survival was analyzed by the log-rank test and Kaplan–Meier plots. The experimental data were statistically evaluated with GraphPad Prism 9.
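For the survival endpoint, a minimal sketch using the lifelines package is shown below; the event times are hypothetical, not the recorded animal data:

```python
# Kaplan-Meier estimate and log-rank comparison of 7-day survival.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

pbs_days = np.array([2, 2, 3, 3, 4, 5, 7, 7])
pbs_events = np.array([1, 1, 1, 1, 1, 1, 0, 0])    # 1 = death; day-7 survivors censored
ygzl3_days = np.array([3, 5, 7, 7, 7, 7, 7, 7])
ygzl3_events = np.array([1, 1, 0, 0, 0, 0, 0, 0])

kmf = KaplanMeierFitter()
kmf.fit(ygzl3_days, event_observed=ygzl3_events, label="YGZL3")
print(f"YGZL3 estimated survival at day 7: {float(kmf.predict(7)):.2f}")

result = logrank_test(pbs_days, ygzl3_days,
                      event_observed_A=pbs_events,
                      event_observed_B=ygzl3_events)
print(f"log-rank p = {result.p_value:.4f}")
```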
3.1. An Efficient Computational Approach to the Design of Novel sHDL Nanoparticles

Concerning our general computational strategy, it is known that amphipathic α-helical proteins, like ApoA-I, have the unique ability to self-assemble around a cylindrical lipid bilayer, forming discoidal lipid–protein particles known as nanodiscs. The most widely accepted structural model for nanodiscs proposes that two amphipathic α-helical proteins wrap around the lipid bilayer in a double-belt configuration. This model has gained support from various experimental methods [ , , ]. Intriguingly, altering the sequence length of the amphipathic α-helical proteins encircling the lipid bilayer can change the size of the nanodiscs without affecting their discoidal shape. Moreover, short α-helical analog peptides derived from the native amphipathic sequence of ApoA-I can also form a discoidal peptide–lipid complex, similar in size to nanodiscs composed of native ApoA-I and lipids. This suggests that sHDL nanoparticles, made from short amphipathic α-helical peptides and lipids, are likely to form nanodisc-like structures in an aqueous solution. Our computational design of an sHDL nanoparticle with potentially improved binding affinities for LPSs and/or LTA consisted of three steps (see the Methods for the computational details): (1) Simulate the dynamically stable 3D structures of various sHDL nanoparticles associated with different peptides by performing molecular dynamics (MD) simulations. Notably, for the MD simulation of sHDL nanoparticles, all-atomistic simulations are usually constrained to the nanosecond timescale due to the large system size, which is inadequate for studying nanoparticle self-assembly. Therefore, we opted to use a coarse-grained (CG) model for the MD simulations. The CG model has been effectively used in previous studies to investigate the assembly of lipoprotein particles and permits MD simulations on the microsecond timescale [ , , , ], offering a more practical and cost-effective approach for simulating sHDL systems. The same CG model was also employed in our subsequent CG MD simulations mentioned below. (2) Simulate the dynamically stable sHDL-ligand binding structure by performing the CG MD simulation for each sHDL nanoparticle binding with a ligand (the LPSs or LTA concerned in this study). (3) Estimate the binding free energy of each ligand with a nanoparticle by performing potential of mean force (PMF) calculations based on the CG MD simulations. This three-step computational approach enabled us to design dynamically stable 3D structures of various sHDL nanoparticles and predict the binding free energies of LPSs and LTA with various sHDL nanoparticles associated with different peptide choices. Particularly, given the flexibility of our computational approach, our choices of peptides were not limited to the sequence of the ApoA-I protein. As depicted in A, the dimer of the ApoA-I protein contains many paired positively and negatively charged residues. We speculated that these pairs of charged residues would be very important for the formation of a parallel dimer structure as well as the function of ApoA-I. However, the ESP-24218 peptide of ETC642 ( C), derived from a consensus sequence of ApoA-I ( B), did not thoroughly consider the interaction of charged residue pairs. Therefore, based on the sequence of ESP-24218, we optimized the charged residues to obtain YGZL1 ( D) and YGZL2 ( E).
Further, we used another apolipoprotein of HDLs, ApoE, to modify the ESP-24218 sequence more substantially, aiming to explore analogues based on the different sequences of ApoA-I and ApoE. After optimizing the matching of charged residue pairs, we designed YGZL3 ( F) and YGZL4 ( G) starting from the ApoE sequence. The CG MD simulations of these peptide-based sHDL nanoparticles revealed that both the YGZL-based sHDL nanoparticles and the lipid-only nanoparticles form similar nanodisc structures after self-assembly, as illustrated in . In the sHDL structure, the helical peptides are aligned in parallel around the edge of the lipid bilayer, an arrangement akin to the parallel alignment of amphipathic ApoA-I proteins in the double-belt conformation. This results in the hydrophilic groups on the phospholipids being more tightly packed on both sides of the sHDL nanodisc, whereas in the lipid-only nanoparticles the hydrophilic groups are more loosely packed. Although both the sHDL and lipid-only nanoparticles captured LPSs/LTA during self-assembly, the PMF calculations demonstrated higher binding affinities (lower binding free energies) of LPSs and LTA for the YGZL-based sHDL particles than for the corresponding lipid-only nanoparticles. Based on the CG MD simulations and the PMF calculations, we were able to predict the binding free energies of the sHDL nanoparticles for LPSs and LTA (see for the predicted binding free energies). Specifically, according to the computational data, the YGZL3-based sHDL was predicted to be the most promising sHDL, with the highest binding affinities (lowest binding free energies) for both LPSs and LTA.

3.2. Targeting HDLs with Synthetic HDLs (sHDLs) for Sepsis Therapy

Based on the computational prediction, we prepared the sHDL nanoparticles and tested their effects in the CLP-induced sepsis model. To evaluate the therapeutic effect, we treated the septic mice two hours post-CLP. As shown in A-D, the sHDLs displayed different protective effects against CLP-induced death: YGZL2 and YGZL3 significantly improved the 7-day survival of CLP-treated mice, with YGZL3 showing the best protection. As shown in E, the sHDL (YGZL3) treatment increased the plasma HDL cholesterol levels. We also tested sHDLs (YGZL3) in P. aeruginosa-induced sepsis, a more clinically relevant bacterial pneumonia sepsis model. The sHDL (YGZL3) treatment significantly protected the mice from P. aeruginosa-induced death (50% survival in YGZL3-treated mice compared to 0% survival in PBS-treated mice) ( F).

3.3. sHDLs Suppress Inflammatory Response

The computational design predicted that sHDLs bind endotoxins and regulate endotoxin-induced inflammation. We validated this by testing the activity of YGZL1 and YGZL3 in regulating the endotoxin-induced inflammatory response in cultured cells; YGZL1 and YGZL3 were the least and most effective sHDLs, respectively, with regard to protection against sepsis. We first investigated the regulation of inflammatory signaling using HEK-Blue cells stably transfected to express either human TLR4 or TLR2 with an NF-κB reporter. The cells were challenged with the corresponding receptor ligands (LPS/TLR4, A,B; LTA/TLR2, C,D) in the presence or absence of various concentrations of YGZL1 or YGZL3. In all cases, we observed a dose-dependent decrease in NF-κB activation with increasing sHDL concentrations. We also tested the effect of sHDLs in macrophages (RAW264.7 cells).
As shown in E,F, both YGZL1 and YGZL3 effectively suppressed LPS-induced TNF-α production. Of note, although YGZL1 and YGZL3 showed different protective abilities against septic death, both effectively suppressed the endotoxin-induced inflammatory response. This suggests that sHDLs have activities beyond regulating the endotoxin-induced inflammatory response. Given the multiple protective activities of HDLs, further studies are required to determine the mechanism underlying the protection against sepsis conferred by YGZL3.
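The dose-dependent suppression of NF-κB activation reported above is the kind of readout commonly summarized with a four-parameter logistic fit; the sketch below shows one way to estimate an IC50 from such data. The data points, parameter names, and starting guesses are hypothetical illustrations, not values from this study.

```python
# Minimal sketch: fit a four-parameter logistic curve to hypothetical
# NF-kB reporter readings versus sHDL concentration to estimate an IC50.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical reporter activity (arbitrary units) at increasing sHDL doses.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])     # mg/mL, made up
resp = np.array([0.98, 0.95, 0.88, 0.70, 0.45, 0.22, 0.10])  # made up

p0 = [0.05, 1.0, 0.5, 1.0]  # rough guesses: bottom, top, IC50, Hill slope
params, _ = curve_fit(four_pl, conc, resp, p0=p0, maxfev=10000)
print(f"estimated IC50 ≈ {params[2]:.2f} mg/mL (hypothetical data)")
```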
In this study, we developed and validated a three-step computational approach to simulate the dynamically stable binding of large nanoparticles, such as sHDLs, with various ligands, including endotoxins. This approach allowed us to predict their binding free energies and to design a novel type of more potent sHDL nanoparticle. These nanoparticles significantly increase overall plasma HDL levels and effectively suppress sepsis-related inflammatory signaling. Using two mouse models of sepsis, we demonstrated for the first time that an ApoE-based novel type of sHDL nanoparticle provides effective protection against sepsis.
We employed this three-step computational approach to design new sHDL nanoparticles based on our hypothesis that pairs of charged residues are important for forming a parallel dimer structure as well as for the function of ApoA-I. Through this approach, we designed novel sHDL nanoparticles (particularly YGZL3) starting from the ApoE sequence. We provided 7-day survival evidence demonstrating that the sHDL YGZL3 protects against sepsis in two clinically relevant models: CLP-induced polymicrobial sepsis and P. aeruginosa-induced pneumonia sepsis. The sHDL YGZL3 was administered after the induction of sepsis and not as a preventive measure; thus, it may serve as a potentially effective therapy for sepsis.
This research is innovative because, unlike earlier sHDLs made of ApoA-I mimetic peptides, we used computer modeling and simulation-based binding free energy predictions to generate a novel type of sHDL based on ApoE, another major component of HDLs. We demonstrated for the first time that an ApoE-based novel sHDL (YGZL3) effectively protected against septic death in two clinically relevant sepsis models. Our computational simulations indicate that the ApoE mimetic peptide binds phospholipids to form stable nanoparticles, making this sHDL (YGZL3) more effective than the first generation of sHDLs made from the naked peptide. Additionally, the general concept of the three-step computational approach may be used to simulate the dynamically stable binding of other large nanoparticles with a variety of ligands and to predict their binding free energies for the computational design of novel nanoparticles.
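As a concrete illustration of the step-3 idea, the sketch below recovers a one-dimensional PMF by Boltzmann inversion of a sampled ligand–nanoparticle distance distribution and reads off an apparent binding free energy as the depth of the bound well relative to the unbound plateau. This is a deliberately simplified stand-in with synthetic samples and no biasing, Jacobian, or volume corrections, not the enhanced-sampling machinery a production PMF calculation would typically use.

```python
# Minimal sketch: 1D PMF by Boltzmann inversion of a sampled
# ligand-nanoparticle distance distribution (synthetic data; a real PMF
# would normally come from enhanced sampling such as umbrella sampling).
import numpy as np

kT = 2.494  # kJ/mol at ~300 K

rng = np.random.default_rng(0)
# Synthetic "trajectory" distances (nm): a bound population near 2 nm plus
# a diffuse unbound population farther out. Purely illustrative numbers.
bound = rng.normal(2.0, 0.15, 8000)
unbound = rng.uniform(3.0, 6.0, 4000)
r = np.concatenate([bound, unbound])

hist, edges = np.histogram(r, bins=60, range=(1.5, 6.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
pmf = -kT * np.log(hist[mask])          # Boltzmann inversion
pmf -= pmf[centers[mask] > 5.0].mean()  # zero the unbound plateau

print(f"apparent binding free energy ≈ {pmf.min():.1f} kJ/mol (toy data)")
```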
We developed a new approach that employed computational simulations to design a new type of sHDL based on HDL’s structure and function. We found that YGZL3, an ApoE-sequence-based sHDL, provided effective protection against sepsis in two mouse models.
Effectiveness of art-based health education on anemia and health literacy among pregnant women in Western Nepal: A randomized controlled trial | 57ac4604-55f6-4ba4-acdc-4d8f4afa8c93 | 11441646 | Health Literacy[mh] | Health literacy (HL) has been defined as “the cognitive and social skills that determine the motivation and ability of individuals to gain access to, understand, and use information in ways that promote and maintain good health” . In developing countries, many social determinants of health affect people’s lives, including poverty, gender inequality, educational disparities, exploitation, violence, and injustice. These factors contribute to illness and death among the poor and marginalized . In the Federal Democratic Republic of Nepal (hereinafter referred to as Nepal), there are significant health disparities due to social determinants of health, and healthcare systems and support for health maintenance and promotion are still inadequate. In particular, Nepalese women have difficulty making decisions and taking health-related actions of their own volition due to various social background-related factors, including religion, tradition, gender roles, and caste; thus, they need to be empowered . Health literacy is a personal skill that can be developed and can promote independence in health care based on the patient’s own decisions . The potential of health literacy to reduce inequalities, increase health system responsiveness, and promote the achievement of the United Nations Sustainable Development Goals is gaining attention . There is growing awareness worldwide of the ethical imperative for patients to participate in decision-making regarding the health care they receive. As patient decision-making has been shown to be effective in behavior change and health outcomes, there has also been recognition of the importance of improving patient health literacy and supporting patient self-determination . These effective health behaviors and health outcomes include reduced patient anxiety and increased patient knowledge, patient satisfaction with treatment decisions, and treatment adherence . While substantial progress has been made in many aspects of healthcare delivery in Nepal, perinatal mortality rates remain high . Moreover, malnutrition among pregnant women in Nepal has a variety of negative effects on mother and child. Anemia during pregnancy is associated with an increased risk of perinatal and maternal death, preterm delivery, and low birth weight . The lack of improvement in perinatal mortality is related to the nutritional status of pregnant women: 46% of pregnant women in Nepal have a body mass index (BMI) below 18.5, indicating a high rate of undernutrition . In addition, 40% of pregnant women in Nepal are anemic, according to the latest data, with no significant improvement in the past 20 years . Most of the previous studies on anemia and malnutrition among Nepalese pregnant women are cross-sectional studies. In addition, iron supplements have been distributed to all pregnant women as a national policy, but the anemia status of pregnant women before and after the intervention has not been assessed. Furthermore, there has been no evaluation of iron supplementation compared with a control group, and an association between low compliance to iron supplementation and high anemia rates among Nepalese pregnant women has been noted . 
In view of the persistent lack of improvement in maternal hemoglobin (Hb) levels in Nepal, the distribution of iron supplements alone is not sufficient to reduce anemia. This conclusion is supported by a study of Nepalese pregnant women with a low prevalence of anemia, which reported higher compliance and greater Hb improvement in the education-plus-pill-count group than in the pill-count-only group. In particular, the causes of health disparities among Nepalese women are diverse, pointing to the need for long-term observation aimed at improving health literacy, in addition to health outcomes, when interventions for health promotion are implemented.
Although we could not identify any previous studies showing that health literacy level affects maternal anemia, we reviewed 14 articles to assess whether pregnant women's health literacy is associated with pregnancy outcomes and whether effective interventions for improving pregnant women's health literacy have been established. A systematic review on this topic reported that the health literacy levels of pregnant women varied widely and that low health literacy was associated with unhealthy behaviors during pregnancy. However, little is known about health literacy in Nepal. A previous study published in 2018 assessed the health literacy levels of chronically ill patients and ascertained their knowledge about their disease. The results showed that 27% of respondents had adequate, 19% had low, and 54% had inadequate health literacy. Factors associated with inadequate health literacy included older age, female sex, low or no education, unemployment or retirement, poverty, and a history of smoking or drinking. Those with adequate health literacy understood their disease or condition significantly better than those with inadequate health literacy.
Health literacy interventions typically consist of lectures, passive teaching, one-way delivery of information, distribution of brochures and leaflets, and health education sessions using visual aids. Traditional methods alone are inadequate for pregnant women in Nepal, who have varying literacy levels and cultural backgrounds and are reluctant to make decisions about their own health. Health education for persons with low health literacy requires materials that match the comprehension skills of the target population, and in this context, material that includes pictures and diagrams is understood significantly better. Therefore, this study aimed to evaluate the effectiveness of "face-to-face health education using educational material created with pictures, photos, and nomograms without text" to improve Hb levels in Nepalese pregnant women. Furthermore, the study sought to assess whether the intervention improved the health literacy of Nepalese pregnant women.
Ethics statement
The survey was conducted with the approval of the Research Ethics Committee (No. 2018180) of Kansai Medical University, to which the principal investigator belongs, as well as the institutional review committees of the Nepal Health Research Council (No. 358) and Western Regional Hospital (No. 64). As an interventional study, this trial required clinical trial registration prior to its start. However, because we were unaware of the need for prospective registration, the trial was registered retrospectively (UMIN Clinical Trials Registry, UMIN000049603; registration date: 11/24/2022). The authors confirm that all ongoing and related trials for this intervention are registered. The analysis in this study was conducted according to the analysis plan presented in the study protocol. The study participants were pregnant women who understood the study and provided written consent before enrollment. Nepalese research nurses, who had received prior training and understood the purpose and content of the study, administered the survey using a Nepali questionnaire identified by an ID number. The questions and answer choices were explained verbally in Nepali by the research nurse, and the verbal responses (in Nepali) were transcribed onto the survey form. For participants aged <20 years, consent was obtained from both the participants and their guardians. Easily understandable explanations were provided for participants from whom informed consent was difficult to obtain.
Study design
This study employed a randomized controlled trial (RCT) design and adhered to the Consolidated Standards of Reporting Trials (CONSORT) Statement. The study used a randomized parallel-group, single-blind design to compare an education group receiving health education with original educational material, a distribution group receiving only the educational material, and a control group receiving no intervention from the researchers beyond routine antenatal checkups.
Eligibility criteria
Anemia tests were performed on 609 pregnant women at 8–12 weeks of gestation who consented to the study, and 201 anemic pregnant women with Hb levels of <11.0 g/dl were identified. Inclusion criteria for screening and consent were as follows: (1) pregnant women aged 16 years or older who were able to give informed consent, (2) an Hb concentration of 7.0–10.9 g/dl, (3) presence of a live single fetus in utero, (4) gestational age of 8–12 weeks at study entry, (5) no underlying disease requiring regular oral medication, (6) no cardiovascular disease, autoimmune disease, or any other condition affecting anemia, and (7) no condition that, in the opinion of the consenting health care professional, required exclusion from the study. An obstetrician, a co-investigator at the collaborating institution, verified the eligibility of the participants. Five research nurses explained the study to women diagnosed with gestational anemia and obtained a signed research consent form.
Randomization and allocation
The participants were assigned to one of three groups by block randomization with a block size of six, following a computer-generated randomization sequence created by a statistician not involved in the study. Research nurses then administered the surveys and interventions according to the randomization list.
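For illustration, a minimal sketch of permuted-block randomization with a block size of six and three arms is shown below; the seed, arm labels, and helper name are placeholders, not the statistician's actual procedure.

```python
# Minimal sketch: permuted-block randomization, block size 6, three arms
# (labels, seed, and target size are illustrative placeholders).
import random

def block_randomize(n_participants: int, seed: int = 42) -> list[str]:
    """Return an allocation list built from shuffled blocks of six,
    each block containing every arm exactly twice."""
    arms = ["education", "distribution", "control"]
    rng = random.Random(seed)
    allocation: list[str] = []
    while len(allocation) < n_participants:
        block = arms * 2          # 6 slots: each arm twice per block
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]

schedule = block_randomize(156)
print(schedule[:6], schedule.count("education"))  # first block, arm total
```

With 156 participants and this block size, each arm receives exactly 52 allocations, matching the group sizes reported in the Results.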
The five research nurses who administered the surveys and interventions were aware of the group allocations, but the allocations were concealed from all participants and from the analysts of the survey results.
Study area and participants
Participants were recruited from pregnant Nepalese women who came to Western Regional Hospital for prenatal care. Western Regional Hospital is the only public general hospital in Pokhara, the second largest city in Nepal, located approximately 200 km west of the capital, Kathmandu. Furthermore, it has the highest number of deliveries among the facilities in Pokhara. Pregnant women from the surrounding urban, mountain, and rural areas use this hospital, and it was therefore chosen as a suitable location for studying pregnant women affected by the social determinants of health that contribute to health disparities. Participant recruitment and the follow-up survey took place from March 2019 to March 2020.
Baseline and follow-up surveys
All participants completed a baseline survey (a questionnaire covering Hb levels, socioeconomic status, health status, and the health literacy scale, plus height and weight measurements) at the first prenatal checkup. A follow-up survey (including Hb levels, confirmation of iron medication, and the health literacy scale) was conducted at 36–40 weeks of gestation. Data were collected through face-to-face interviews by five research nurses who had received prior training from the principal investigator and co-investigators on how to conduct interviews using the questionnaire. Hb measurements were performed by laboratory technicians at the collaborating facilities using the same equipment and analytical methods.
Intervention
The education group received three individual health education sessions across pregnancy: at the 8–12 weeks baseline survey, the 20–24 weeks prenatal checkup, and the 30–34 weeks prenatal checkup. Health education was provided face-to-face by Nepalese research nurses trained by the principal investigator and co-investigators and lasted approximately 10 minutes per session. Original text-free material consisting of pictures, photographs, and nomograms was used in the sessions. Pregnant women in the distribution group received only the original educational material and no individualized health education. Pregnant women in the control group received a general prenatal health examination; however, they received the original educational material in the third trimester (36–40 weeks) to ensure that they were not disadvantaged. We also developed a teaching manual for the research nurses who conducted the education sessions, covering perinatal medicine, nutrition, lifestyle, and recipes. The opinions of two Nepalese obstetricians were incorporated to confirm that the text matched the content of the original educational material and that the content was appropriate for educating Nepalese pregnant women.
The face-to-face intervention was tailored to each woman's level of understanding, existing knowledge, and health concerns, and covered the following topics: medical knowledge about anemia in pregnancy; the effects of anemia on mother and child; the importance of iron supplementation; Nepali women's food culture and nutritional imbalances; junk food consumption; iron-rich foods and cooking methods to prevent and improve anemia; and precautions and side effects when taking iron pills and how to deal with them. As part of the nutrition education program, we used illustrations and photos to introduce menus and recipes that improve anemia using inexpensive, locally available food items.
Data collection
Baseline study
Information on social background relating to the determinants of health was obtained in the baseline study. Questions covered family name (to identify ethnicity and caste); age (date of birth); gestational age; place of residence; religion; age at marriage; family structure; pregnancy history; number of children; birth interval; employment of mother and husband; occupation of mother, husband, and father; household income; education of mother and husband; and literacy of mother and husband. Questions on health status covered underlying medical conditions and pregnancy complications. Questions on daily life covered burden (working hours and sense of burden); the husband's interest in and understanding of prenatal care; smoking and drinking; exposure to secondhand smoke; frequency of meals; frequency of meat, fish, and legume consumption; treatment history of various infectious diseases, including HIV and AIDS; and iron supplementation. The questionnaire was developed by the principal investigator with reference to previous studies and was translated into Nepali.
Intervention follow-up study
A follow-up study was conducted in all groups. Questionnaires were administered during prenatal checkups at a gestational age of 36–40 weeks. The questionnaire consisted of the 14-item Health Literacy Scale (HLS-14) and iron medication status. Hb levels were also tested.
Primary endpoint
Hemoglobin level (g/dl)
Blood samples were collected by research nurses and analyzed at the laboratories of the cooperating research facilities to assess blood Hb levels. In accordance with the World Health Organization (WHO) cutoff, anemia in pregnancy was defined as an Hb level of <11.0 g/dl. Pregnant women with anemia were screened in the baseline study at 8–12 weeks of gestation, and women diagnosed as anemic during this screening were recruited for the intervention study. Hb levels were assessed again at 36–40 weeks of gestation in the follow-up study. The primary outcome measure was the change in Hb levels between the baseline and follow-up studies in the three groups.
Secondary endpoints
Health literacy
The HLS-14, developed by Suka et al. as a measure of health literacy in Japanese patients, was used. The HLS-14 is a comprehensive health literacy scale consisting of 14 items (5 functional literacy items scored out of 25 points, 5 communicative literacy items scored out of 25 points, and 4 critical literacy items scored out of 20 points), giving a total score of 70 points. Functional literacy refers to the basic skills of reading and writing needed to function effectively in daily life.
Communicative literacy refers to more advanced skills for actively participating in daily life: obtaining information, understanding one's health status from various forms of communication, and applying new information to changing health conditions. Critical literacy refers to more advanced skills in critically analyzing information and using it to improve one's health. The HLS-14 has demonstrated reliability and validity primarily among Japanese adults and has also been validated in other countries.
Development of a Nepalese version of the HLS-14
Approval for the development of the Nepalese version of the HLS-14 was obtained from the corresponding author in Japan, the developer of the HLS-14. The equivalence of interpretations of the meaning of each item was discussed among the two Nepalese research collaborators, the Japanese principal investigator, and a Japanese researcher living in Nepal. The constructs of the original HLS-14 and their appropriateness for application among Nepalese people were discussed, in English and Nepali. To ensure equivalence of meaning and interpretation, the original questionnaire was translated directly from Japanese into Nepali by one Nepali with a master's degree in economics, a native speaker of Nepali fluent in English and Japanese, and one Japanese national with a master's degree in public health living in Nepal. Back-translations were then made from the Nepali version into English and from English into Japanese by translators (one native Nepali speaker fluent in English and one native Japanese speaker fluent in English) who were unaware of the purpose of this study. It was then confirmed that the four researchers had a common understanding of each item and of the words within each item, guaranteeing equivalence of meanings and items. As a result, one word in one item was modified to make it more understandable. After this process, the Nepali version of the HLS-14 questionnaire was complete. Five Nepalese research nurses who understood the main purpose of the study checked the Nepali version of the HLS-14 for any inconsistencies or discrepancies in meaning or interpretation.
Body mass index
Height and weight were measured during the first prenatal checkup (baseline study). A seca 213 portable stadiometer was used for height, and a Tanita BC-314 body composition scale (Tanita Corp., Tokyo, Japan) was used for weight. Weights were compared with those from three other scales to rule out measurement error. BMI was calculated as weight (kg) divided by the square of height (m²). The WHO cutoff values for BMI were used: underweight, <18.5; normal weight, 18.5–24.9; and overweight/obese, >25.0.
Supplementation of Hemferon-S tablets
All participants in the intervention and control arms received one tablet of Fefo (200 mg of dried ferrous sulphate + 0.40 mg folic acid) once daily. The Government of Nepal launched the Iron Intensification Program (IIP) in 2003, under which iron and folic acid are distributed to all pregnant women, who receive the supplements from the first perinatal checkup until 45 days postpartum.
Sample size
In a previous study, 320 pregnant Nepalese women were randomly divided into four groups (education group, iron count group, education + iron count group, and control group), and the effects of health education were examined in terms of Hb levels and anemia incidence.
To demonstrate the effect of health education using pictures and graphs, we assumed an effect size of 0.3571, α = 0.05, power = 0.95, and a dropout rate of 10%, yielding a total sample size of 138 cases, with 46 cases in each of the three groups (G*Power v.3.1).
Analysis method
Using confirmatory factor analysis, fit indices (comparative fit index, Tucker-Lewis index, and root mean square error of approximation) were calculated for 609 pregnant women to confirm the factor structure of the Nepali version of the HLS-14 and validate its goodness of fit. Cronbach's alpha coefficient was used to evaluate the reliability of the scale. Descriptive statistics were used to summarize the data, with frequencies and percentages describing the characteristics of the study participants and continuous variables summarized as the mean and standard deviation (SD). Fisher's exact test and the chi-square test were used to analyze the associations of categorical variables among the three groups. Analysis of variance (ANOVA) was used to evaluate the primary outcome (Hb levels). For health literacy (the secondary outcome), the Kruskal-Wallis test was used to compare health literacy scale scores among the three groups, and the Wilcoxon signed-rank test was used to evaluate the change in total and subscale health literacy scores before and after the intervention. P-values less than 0.05 were considered statistically significant. Statistical analysts were blinded to group assignment. All outcomes were analyzed using all available data in an intention-to-treat framework. The data were statistically analyzed using IBM SPSS Statistics, version 28.0 (IBM Japan, Tokyo, Japan).
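As an approximate cross-check of the sample size calculation above, a one-way ANOVA power analysis with the stated inputs can be sketched as below; the use of statsmodels (rather than G*Power) and the rounding/dropout arithmetic are our illustrative choices, so small differences from the reported 138 are expected.

```python
# Minimal sketch: approximate a 3-group one-way ANOVA sample size calculation
# (effect size f = 0.3571, alpha = 0.05, power = 0.95), then inflate for a
# 10% dropout rate. Uses statsmodels instead of G*Power for illustration.
import math
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.3571, alpha=0.05, power=0.95, k_groups=3
)
n_with_dropout = n_total / (1 - 0.10)          # allow for 10% attrition
per_group = math.ceil(n_with_dropout / 3)      # round up to whole participants
print(f"base N ≈ {n_total:.0f}, with dropout ≈ {per_group * 3} ({per_group}/group)")
```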
Of the 609 pregnant women at 8–12 weeks of gestation who underwent prenatal health examinations during the study period and consented to anemia screening, 201 were diagnosed as anemic. The following were excluded: 16 women who did not consent to the study, 1 woman with heart disease, 2 women with twins, 1 woman with severe anemia, and 25 women at more than 13 weeks of gestation. A total of 156 pregnant women participated in the study and were randomly assigned to one of the three groups, each with 52 women. Eighteen participants withdrew from follow-up because of miscarriage, premature birth, stillbirth, or transfer to another hospital, leaving 138 participants in the final analysis: 49 in the education group, 44 in the distribution group, and 45 in the control group.
The participants had a mean age of 24.4±4.5 years and a mean age at marriage of 20.2±3.2 years; 23 (16.7%) were in their teens at marriage, and 72 (52.2%) were in their 20s. The mean age at first childbirth was 21.1±3.2 years. One hundred and twelve (81.2%) participants were urban dwellers, 26 (18.8%) were rural dwellers, 127 (92.0%) were Hindu, 68 (49.3%) lived in extended families, and 70 (50.7%) lived in immediate families. By ethnicity and caste, 65 (47.1%) were Brahman Chettri, a high caste; 40 (29.0%) were Janajati; 29 (21.0%) were Dalit; and 4 (2.9%) were Muslim, Madeshi, or other. Furthermore, 77 (55.8%) were in their first pregnancy, 8 (5.8%) had a previous delivery 1–2 years earlier, and 53 (38.4%) had a previous delivery more than 3 years earlier; only one woman had a history of worm (parasitic) infection. None of the participants smoked; one had quit smoking during the pregnancy. Thirty-four (24.6%) women had family members who smoked, and 112 (88.4%) were taking iron tablets. In total, 134 (97.1%) of the participants and 130 (94.2%) of their husbands were literate, 3 (2.2%) of the participants and 5 (3.6%) of their husbands had no primary education, and 113 (81.9%) of the participants were housewives. "Migrant worker" was the most common occupation of the participants' husbands (58 [42.0%]), and 97 (70.3%) of the households had a monthly income of Rs 10,000–49,999.
No significant differences in pre-intervention Hb values were observed at baseline among the three groups (F(2,135) = 0.218; P = 0.804). The Hb levels before and after the intervention were 10.20±0.62 g/dl and 11.58±0.65 g/dl in the education group (t(48) = 11.303; P<0.001), 10.19±0.69 g/dl and 11.15±1.08 g/dl in the material distribution group (t(43) = 5.286; P<0.001), and 10.27±0.51 g/dl and 11.11±1.20 g/dl in the control group (t(44) = 4.983; P<0.001), respectively. The post-intervention three-group comparison showed a statistically significant difference in mean Hb levels (F(2,135) = 3.253; P = 0.042). Dunnett's test showed a statistically significant difference between the education and control groups (P = 0.044) and no significant difference between the distribution and control groups (P = 0.972).
Confirmatory factor analysis of the health literacy scale, applied to the scale developer's model, showed reasonable goodness of fit (χ²(64) = 347.090, p < 0.0001) and an adequate Kaiser–Meyer–Olkin measure of sampling adequacy (KMO = 0.903). The factor loadings of all items were 0.614–0.962, and all Cronbach's alpha coefficients for the subscales were above 0.970.
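For reference, the subscale reliability reported above (Cronbach's alpha) can be computed from item-level responses as sketched below; the toy response matrix and function name are illustrative, not the study data.

```python
# Minimal sketch: Cronbach's alpha for a set of scale items
# (toy 5-item, 6-respondent matrix; not the study's data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

toy = np.array([
    [5, 4, 5, 4, 5],
    [3, 3, 4, 3, 3],
    [4, 4, 4, 5, 4],
    [2, 2, 3, 2, 2],
    [5, 5, 5, 5, 4],
    [3, 4, 3, 3, 3],
])
print(f"alpha = {cronbach_alpha(toy):.3f}")
```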
No significant differences in health literacy (total or subscale scores) were observed among the three groups before the intervention (Kruskal-Wallis test). Post-intervention health literacy scores (total and subscales) also did not differ significantly among the three groups, although there was a trend toward improvement.
In comparisons of changes in the health literacy scale (total and subscale scores) before and after the intervention, a statistically significant difference in total health literacy scores was observed in the overall sample and in all three groups (P<0.001). The median (interquartile range) pre- and post-intervention total scores were 59.5 (53.8–70.0) and 100.50 (92.8–120.0) in the overall sample, 61.0 (54.52–69.75) and 108.50 (94.0–118.0) in the education group, 56.0 (51.25–70.0) and 96.0 (77.25–117.5) in the distribution group, and 58.50 (55.0–70.0) and 98.5 (96.0–120.0) in the control group, respectively.
Only the education group showed statistically significant differences in the functional, communicative, and critical literacy subscale scores. The median (interquartile range) pre- and post-intervention functional literacy scores were 25.0 (19.25–25.0) and 25.0 (20.0–25.2) in the education group (P = 0.012, statistically significant), 22.5 (18.25–25.0) and 21.0 (18.25–25.0) in the distribution group (P = 0.892), and 20.50 (19.0–25.0) and 21.0 (19.0–25.0) in the control group (P = 0.127), respectively. The median (interquartile range) communicative literacy scores before and after the intervention were 20.5 (20.0–25.0) and 22.0 (20.0–25.0) in the education group (P = 0.004, statistically significant), 20.5 (20.0–25.0) and 20.5 (20.0–25.0) in the distribution group (P = 0.527), and 22.0 (20.0–25.0) and 21.5 (20.0–25.0) in the control group (P = 1.000), respectively. The median (interquartile range) critical literacy scores before and after the intervention were 17.0 (16.0–20.0) and 17.5 (16.5–20.0) in the education group (P = 0.014, statistically significant), 16.5 (16.0–20.0) and 16.5 (16.0–20.0) in the distribution group (P = 0.317), and 17.0 (16.0–20.0) and 16.5 (16.0–20.0) in the control group (P = 0.157), respectively.
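A minimal sketch of the nonparametric comparisons used for the health literacy scores is given below; the score arrays are fabricated placeholders, and the calls simply illustrate the Kruskal-Wallis and Wilcoxon signed-rank usage described in the methods.

```python
# Minimal sketch: Kruskal-Wallis across three groups and Wilcoxon
# signed-rank for paired pre/post scores (toy numbers, not study data).
import numpy as np
from scipy.stats import kruskal, wilcoxon

rng = np.random.default_rng(1)
education = rng.integers(80, 121, 49)     # post-intervention totals (toy)
distribution = rng.integers(70, 118, 44)
control = rng.integers(75, 121, 45)

h_stat, p_between = kruskal(education, distribution, control)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, P = {p_between:.3f}")

pre = rng.integers(50, 71, 49)            # toy paired pre/post scores
post = pre + rng.integers(0, 15, 49)
w_stat, p_within = wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, P = {p_within:.4f}")
```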
Iron deficiency anemia among Nepalese women is a serious public health problem that has long been addressed but remains unresolved. The mean age at first childbirth in this study was as low as 21.1±3.2 years, and more than half of the women were in their teens or early twenties, suggesting that anemia had persisted since adolescence. This study showed that Hb levels and health literacy can be improved by creating text-free material and providing continued face-to-face individualized health education from early pregnancy. The health education group showed significantly improved Hb levels compared with the handout and control groups.
One art-based reproductive health intervention study in India used a health literacy improvement approach to strengthen women's decision-making power regarding contraceptive knowledge, marital communication, family planning decision-making, and women's reproductive rights through street theater and puppet shows. Lori et al. also conducted a study in Ghana aimed at improving health literacy to increase the ability to understand health messages and practice healthy behaviors. They used a "take action" strategy involving demonstrations, role-plays, and puppet shows to help women understand and practice health messages; health literacy improved in the intervention group compared with the group that received general maternity care. In addition, a prospective cohort study in Ghana examined the effects of storytelling, peer support, demonstrations, and teach-back on improving health literacy; prenatal and postpartum women who participated showed an improved understanding of health messages and improved health literacy. Therefore, health literacy interventions targeting women in developing countries are likely to contribute to improved reproductive health.
The literacy rate of the participants in this study was more than 90% for both wives and husbands, comparable with the literacy rate of youth aged 15–24 years in Nepal. Furthermore, 98.3% of the participants had at least a primary-level education, indicating that they were capable of understanding health-related information. However, their communicative and critical literacy scores were low, suggesting that low health literacy may be one of the factors that keep health issues unresolved, and increasingly complex, despite the expanding educational system among the younger generation in Nepal.
Information is a necessary foundation for personal decision-making. The desired health behavior is autonomous decision-making: obtaining reliable information, selecting the information appropriate to one's own situation from the available options, and deciding for oneself. However, Nepalese women's autonomy in household decision-making is reported to be low in all respects, including managing their own health, making major household purchases, purchasing daily necessities, and visiting family and relatives. In other words, if health literacy can be strengthened with effective methodologies that empower Nepalese women to take an interest in their own health, seek out health information, discern and select information, communicate it to others, and adapt it to their own lives, this will contribute significantly to maintaining and improving not only their own health but also that of their families.
Interventions for the younger generation are particularly useful, and health behaviors are more likely to be sustained; thus, it is hoped that interventions to improve health literacy will become widespread as preconception care. If Nepalese women can improve their health, including improving nutrition from preconception, they can prevent perinatal complications, improve birth outcomes, and maintain and promote fetal health. Preconception care, which includes improving nutrition from adolescence, delaying the time of conception and optimizing pregnancy spacing, is important in Nepal because the majority of pregnant women are teenagers or in their early 20s. Antenatal care (ANC) also ensures continuity in maternal and fetal health care, but no health education for individuals or groups of pregnant women is currently provided. Continuous health education in ANC is important to further improve maternal health in Nepal.

This study found that art-based material and face-to-face tutoring improved the health literacy of Nepalese pregnant women, and that improved health literacy contributed to improved anemia among them. Health literacy was found to be an important factor in improving the nutritional status of Nepalese pregnant women. Currently, the nutritional challenges of Nepalese pregnant women are becoming more complex. A comprehensive nutritional assessment using multiple indicators is needed to determine the extent and characteristics of nutritional disorders. This indicates an urgent need for nutrition assessment and for a review of existing nutritional support for all generations in developing countries. This study focused on health literacy, in addition to literacy itself, and on an educational intervention. It revealed the importance of health literacy as a determinant of the nutritional status of anemic Nepalese pregnant women and showed the effectiveness of the intervention. Health literacy was identified as an important factor in supporting nutrition improvement for Nepalese pregnant women even though literacy rates have increased. Furthermore, the study showed the importance of evidence-based knowledge dissemination using effective educational material. The material employed in this study was designed using pictures and diagrams that are easy for pregnant women and families with low health literacy to understand visually.

Limitations

In this study, only a self-reporting procedure was used to confirm iron medication behavior, which limited tracking. A research procedure that counts iron tablet sheets should be implemented in the future to confirm medication adherence accurately. In addition, this was a single-center RCT, and long-term multicenter observational studies are desirable in the future. Furthermore, to determine the cause of anemia, not only the Hb level but also a comprehensive evaluation of other parameters, including hematocrit, mean corpuscular volume, red blood cell count, mean corpuscular Hb, mean corpuscular Hb concentration, red cell distribution width and ferritin, is necessary.
The results of this study showed that individualized health education using pictures, photographs and nomograms without text was effective in improving Hb levels among pregnant Nepalese women. A Nepali version of the HLS-14 was developed as a health literacy measurement tool and found to be useful for evaluating health literacy levels before and after the intervention. The women who received the education sessions showed significantly greater improvement in total health literacy and in the functional, communicative and critical literacy scores than the distribution and control groups.
S1 Checklist. CONSORT 2010 checklist of information to include when reporting a randomised trial. (DOC)

S1 File. (DOCX)

S2 File. Inclusivity in global research. (DOCX)
Prediction of protein interactions with function in protein (de-)phosphorylation

Protein–protein interactions (PPIs) play crucial roles in fundamental processes in living cells. PPIs in cells form a complicated network that has been named the “interactome”. By coordinating the activity of many proteins and protein complexes, the interactome performs many functions, including signal transduction, cell growth and differentiation, catalysis of metabolic reactions, activation or suppression of proteins, and transport of molecules. Studying PPIs can help to reveal the underlying molecular machinery in cells. Aberrant PPIs are associated with a wide range of human diseases, including cancer, infectious diseases and neurodegenerative diseases. Recent studies indicate that targeting and restoring dysregulated PPIs is a promising strategy for drug development and therapeutic intervention.

Several studies support the idea that complex networks, such as the interactome, are well suited to be modeled using hyperbolic geometry, a space whose mathematical properties naturally lead to the emergence of networks with scale invariance and strong clustering. The Popularity-Similarity (PS) model provides a geometric interpretation in hyperbolic space (H2) and assumes that the clustering and hierarchy of complex networks arise from trade-offs between the popularity and similarity of nodes. In the PS model, the network nodes are situated within a circle at polar coordinates: the radial coordinate of a node represents its popularity or seniority, the angular coordinate reflects the similarity between nodes, and the hyperbolic distance between nodes abstracts an optimization process in which new nodes connect to nodes that are popular and similar. Alanis-Lobato et al. found that the embedding of the human Protein-Interaction Network (hPIN) in hyperbolic space has biological interpretations in terms of the PS model. The radial positioning of the nodes encapsulates information about the conservation and evolution of proteins, corresponding to popularity and seniority; nodes close to the center of the circle represent proteins that evolved earlier and had more time to receive connections from newer proteins situated in the periphery of the circle. The angular positioning reflects the functional similarity between proteins and is driven by interactions in pathways and protein complexes, thus capturing the functional and spatial organization of the cell. This mapping can also lead to a better understanding of complex human disorders.

Information on protein interactions can be obtained by a variety of experimental methods, and these data are systematically stored in specialized databases. However, little is known about the function of many of these interactions, especially those obtained by high-throughput methods like yeast two-hybrid. Following our findings about the biological properties encapsulated in the mapping of the hPIN in hyperbolic space, here we explore whether this mapping also contains information that would allow us to predict the function of PPIs. PPIs may result in the post-translational modification (PTM) of one of the interacting proteins. PTMs are covalent or enzymatic modifications of a protein that occur after protein synthesis.
They are classified into different groups, such as the addition of functional/chemical groups (acetylation, methylation, phosphorylation), the addition of a polypeptide chain (ubiquitination, SUMOylation), the addition of other complex molecules (palmitoylation, glycosylation), and amino acid modifications (proteolytic cleavage). In this work, we applied a machine learning method (random forest, RF) to predict whether PPIs result in PTMs, using properties extracted from the mapping of the hPIN in hyperbolic space. To assess the potential of our algorithm, we predicted PTM-related protein interactions (PTM-PPIs) of ataxin-1, a protein implicated in spinocerebellar ataxia type 1 (SCA1). SCA1 is a severe neurodegenerative disease caused by CAG-trinucleotide repeat expansions (> 39) in the ATXN1 gene. These mutations induce misfolding of polyQ-expanded ataxin-1, leading to its accumulation into toxic intranuclear inclusions in human neurons. The exact mechanism of protein aggregation remains unknown; however, recent evidence indicates that abnormal PTMs in ataxin-1, especially phosphorylation, significantly accelerate the aggregation process. Proteomics analysis in a cellular model of polyQ-expanded ataxin-1 aggregation enabled the construction of a perturbed hPIN. The SCA1 hPIN network contained 12 out of 32 predicted PTM-PPIs directly related to common upstream regulators. A compact cluster composed of ataxin-1, its dysregulated PTM-PPIs and their upstream regulators was highly correlated with SCA1, suggesting that it might represent a crucial part of disease pathology.
Human protein interaction network construction

The hPIN is a subset of release 2.3 of the Human Integrated Protein–Protein Interaction rEference (HIPPIE) (26,27). HIPPIE retrieves interactions between human proteins from major expert-curated databases and calculates a score for each one, reflecting its combined experimental evidence. The raw version of this network is available in the Download section of the HIPPIE database. In this study, the hPIN was constructed using interactions with a confidence score ≥ 0.71, a threshold that selects for a high percentage of interactions supported by at least two publications. After discarding self-interactions and extracting the network’s largest connected component (LCC), we obtained an hPIN consisting of 15,587 proteins (nodes) with 186,196 interactions (edges).

Mapping the human protein interactome in hyperbolic space

We embedded the hPIN in the two-dimensional hyperbolic plane using the R package “NetHypGeom”, which implements the LaBNE+HM algorithm. This algorithm combines manifold learning and maximum likelihood estimation to model the geometry of complex networks. The PS model has a geometrical interpretation in hyperbolic space (H2), where nodes that join the system connect with the existing ones that are hyperbolically closest to them. The network was embedded in H2 to infer the hyperbolic coordinates of each protein, with parameters γ = 2.97, T = 0.83, and w = 2π. The 15,587 nodes of the hPIN lie within a hyperbolic disc where the radial coordinate of a node, r_i, represents the popularity dimension, with nodes that joined the system first lying close to the disc’s center. The angular coordinate, θ_i, represents the similarity dimension.

Clustering in the similarity dimension

To cluster proteins in the similarity dimension, we computed the difference between consecutive angular coordinates to identify large gaps. The nodes were sorted by increasing inferred angle θ, and the difference between θ_i and θ_i+1 was computed to identify the largest gaps between protein clusters in the similarity dimension. A gap size (g = 0.0077) that produces sectors with a minimum of three components was chosen. The same process was followed to subcluster the proteins of the first sector, with a minimum of five components in each subcluster and a gap size of g = 0.0042. To determine the start and the end of each cluster, we chose gap sizes g that produced clusters with a minimum number of members (3 and 5, respectively), because this allowed us to perform a meaningful functional enrichment analysis of each group of proteins. We carried out Gene Ontology (GO) enrichment analysis for the proteins in each sector of the hPIN, using the nodes of the network as the background set. Only GO Biological Process (BP) terms enriched at a significance level (p-value) of 0.05 or less were kept. Neighboring clusters with similar biological functions were merged to avoid redundancy.
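To make the pipeline up to this point concrete, the R sketch below reconstructs the LCC extraction, the LaBNE+HM embedding and the gap-based angular clustering. The HIPPIE file name and column names are hypothetical, and the labne_hm() call follows the NetHypGeom interface as we understand it; treat both as assumptions rather than a verbatim reproduction of the study’s code.

```r
library(igraph)
library(NetHypGeom)

# Build the high-confidence hPIN; file and column names are hypothetical
hippie <- read.delim("hippie_v2.3.txt")
edges  <- subset(hippie, score >= 0.71 & idA != idB)   # keep scores >= 0.71, no self-loops

g    <- simplify(graph_from_data_frame(edges[, c("idA", "idB")], directed = FALSE))
comp <- components(g)
hpin <- induced_subgraph(g, which(comp$membership == which.max(comp$csize)))  # the LCC

# LaBNE+HM embedding with the parameters reported above
emb   <- labne_hm(net = hpin, gma = 2.97, Tem = 0.83, w = 2 * pi)
polar <- emb$polar   # per-node radial (r) and angular (theta) coordinates

# Gap-based clustering in the similarity (angular) dimension
theta  <- sort(polar$theta)
gaps   <- diff(c(theta, theta[1] + 2 * pi))  # include the wrap-around gap
breaks <- which(gaps > 0.0077)               # gap size g chosen in this study
```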
Selection of experimentally known phosphorylation and dephosphorylation PPIs

The functional associations of the interactions within the hPIN were extracted from multiple providers using the PSICQUIC web service. PSICQUIC enables access to molecular interaction databases supporting the PSI-MI format, which provides a hierarchical structure describing protein interactions. Specifically, we considered the interactions annotated with the child terms of the PSI-MI category 0414 “enzymatic reaction”, and in particular we focused on two of them: PSI-MI category 0217 “phosphorylation reaction” and 0203 “dephosphorylation reaction”. The frequency of use of other terms (e.g., ubiquitination, methylation, acetylation) was too low for our purposes. From the interactions annotated as phosphorylation or dephosphorylation, we selected those for which we were able to determine the direction of PTM activity from an effector protein (protein kinase or protein phosphatase) to a target, according to the annotations of the interacting proteins. To identify effector proteins, we used KinaseMD and the human DEPhOsphorylation Database (DEPOD). We discarded cases in which neither protein was identified as a putative effector, or in which both proteins were identified as the same effector type (protein kinase or protein phosphatase), because the direction of the PTM-related interaction cannot be identified in these cases. We obtained a total of 295 PPIs as PTM-related directed interactions (from effector protein to target protein; training dataset). Two cases involved a protein kinase and a protein phosphatase mutually acting on each other. These 295 PPIs were used as positives, and the rest were used as negatives to train our model.

Feature extraction

We used a total of 14 features to train a classifier to detect PTM-related directed PPIs. Given a directed PPI to test, one node is taken as the effector and the other as the target, according to the direction being tested. Six properties are taken for the effector and six for the target: two are their hyperbolic coordinates (r and theta), and the other four are measures of centrality. In network analysis, centrality measures evaluate the importance of a node based on certain parameters. As measures of centrality, we used degree centrality (DC), betweenness centrality (BC), closeness centrality (CC) and eigenvector centrality (EC). DC is the number of immediate neighbours of a given node. BC computes the significance of a node by calculating the fraction of all shortest paths that pass through it. CC defines the proximity of a node to all the others, and EC reflects the influence of a node in a network. The remaining two properties are defined for the edge: the hyperbolic distance between the interacting proteins and the r difference (absolute value). The values used are provided as supporting data.
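As an illustration of this feature set, the following R sketch computes the four centralities with igraph and assembles the 14-dimensional feature vector for a candidate effector→target pair. The hyperbolic distance is written out explicitly via the hyperbolic law of cosines rather than through a package helper, and `polar` is assumed to be the coordinate table from the embedding sketch above, indexed by protein identifier; the feature names are illustrative.

```r
library(igraph)

# Node centralities on the hPIN
dc <- degree(hpin)
bc <- betweenness(hpin)
cc <- closeness(hpin)
ec <- eigen_centrality(hpin)$vector

# Hyperbolic distance (law of cosines) for points in polar coordinates (r, theta)
hyp_dist <- function(r1, t1, r2, t2) {
  dt <- pi - abs(pi - abs(t1 - t2))  # angular separation on the circle
  acosh(pmax(1, cosh(r1) * cosh(r2) - sinh(r1) * sinh(r2) * cos(dt)))
}

# 14-feature vector for a candidate effector -> target pair
edge_features <- function(eff, tgt) {
  c(r_eff = polar[eff, "r"], theta_eff = polar[eff, "theta"],
    dc_eff = unname(dc[eff]), bc_eff = unname(bc[eff]),
    cc_eff = unname(cc[eff]), ec_eff = unname(ec[eff]),
    r_tgt = polar[tgt, "r"], theta_tgt = polar[tgt, "theta"],
    dc_tgt = unname(dc[tgt]), bc_tgt = unname(bc[tgt]),
    cc_tgt = unname(cc[tgt]), ec_tgt = unname(ec[tgt]),
    hyp_d  = hyp_dist(polar[eff, "r"], polar[eff, "theta"],
                      polar[tgt, "r"], polar[tgt, "theta"]),
    r_diff = abs(polar[eff, "r"] - polar[tgt, "r"]))
}
```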
Model development and evaluation

Model development was done using the “caret” package in R. For the primary model building, we used k-fold repeated cross-validation on a training partition (70%) and validated the model on a left-out external validation sample (30%). We used the random forest (RF) algorithm from the “caret” package to train our model. Five-fold repeated cross-validation (repeats = 10) was used to identify optimal hyperparameters. The parameter values were varied, and the optimal values were chosen based on accuracy (mtry = 14, ntree = 500). We report accuracy scores on the 5-fold repeated cross-validation (repeats = 10) samples. To address class imbalance, we used under-sampling when sampling for cross-validation. The importance of each feature was then calculated. This study used a ROC curve to determine the efficacy of the RF model. The receiver operating characteristic (ROC) curve represents the relationship between the false positive rate and the true positive rate for each cut-off value used to define positive and negative classification results. We then calculated the area under the curve (AUC) value, which describes the classifier’s ability to discriminate between positive and negative results. It is a standard measure of prediction quality and is commonly used to compare the performance of models.
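A minimal caret sketch with the settings reported above (70/30 split, five-fold cross-validation repeated ten times, down-sampling, mtry = 14, 500 trees) could look as follows. `feat` is assumed to be a data frame holding the 14 features plus a two-level factor `class` with illustrative levels "ptm" and "other" (valid R names are required when class probabilities are requested); this is a sketch, not the study’s actual code.

```r
library(caret)
library(pROC)

set.seed(1)
idx       <- createDataPartition(feat$class, p = 0.70, list = FALSE)
train_set <- feat[idx, ]
test_set  <- feat[-idx, ]

ctrl <- trainControl(method = "repeatedcv", number = 5, repeats = 10,
                     sampling = "down",   # under-sampling for class imbalance
                     classProbs = TRUE)

rf_fit <- train(class ~ ., data = train_set, method = "rf",
                trControl = ctrl, tuneGrid = data.frame(mtry = 14),
                ntree = 500)

pred  <- predict(rf_fit, test_set)
probs <- predict(rf_fit, test_set, type = "prob")

confusionMatrix(pred, test_set$class)  # accuracy, sensitivity, specificity
auc(roc(test_set$class, probs$ptm))    # AUC on the held-out 30%
varImp(rf_fit)                         # per-feature importance
```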
Comparison of predictions by alternative methods

We used kinase–substrate predictions from two alternative methods to add support to our predictions: PhosD and Phosformer-ST. We obtained a set of predictions from PhosD using a score threshold of 0.5 (1,852 predictions). Of those, only 1,062 overlapped with PPIs in our HIPPIE dataset. Phosformer-ST assigns scores to serine/threonine phosphorylation sites. To be able to compare this approach with ours, we re-assigned the predictions at the protein level, marking as phosphorylated those proteins that contain at least one peptide with a score above 0.5. Isoforms were removed from the comparison set, since the tools’ predictions are based on sequence fragments and can diverge among isoforms. This resulted in a set of 451,724 predictions, of which only 961 overlapped with PPIs in our HIPPIE dataset.

Mass spectrometry (MS) analysis

The generation of Tet-On YFP-ATXN1(Q82) mesenchymal stem cells (MSCs) has been previously described. Cells were cultured for 10 days in the presence or absence of doxycycline. For protein extraction and solubilization, technical triplicates of cells were vigorously shaken at 95 °C in hot SDT buffer and centrifuged at high speed. Protein solutions were loaded onto a polyacrylamide gel and stained with Coomassie Brilliant Blue G-250 (CBB-G250) for sample quality control. A 10-kDa cut-off filter was used for FASP sample processing, which includes protein reduction with dithiothreitol and alkylation with iodoacetamide to prevent disulfide bond formation, followed by incubation of the samples with trypsin at 37 °C for 18 hours. Extraction with ethyl acetate was used to remove any potential SDS traces from the resulting peptide mixture. Liquid chromatography with tandem mass spectrometry (LC-MS/MS) analysis of the peptide mixture was performed on an Ultimate 3000 RSLCnano system (Thermo Fisher Scientific) coupled to a Q Exactive HF-X Orbitrap system (Thermo Fisher Scientific). The analytical column outlet was linked to a Digital PicoView 550 ion source (New Objective) coupled with an Active Background Ion Reduction Device (ABIRD, ESI Source Solutions). MS data were acquired with a data-dependent strategy, selecting up to the top 20 precursors.

MS data processing

Raw MS data were processed in MaxQuant (version 1.6.3.3) using the Andromeda search engine. Peptide sequences were searched against the UniProtKB database (version 20180912, human) and the MaxQuant contaminants database (downloaded with the given version). Mass tolerances for peptides and MS/MS fragments were 4.5–10 ppm and 0.05 Da, respectively. Oxidation of methionine, deamidation (N, Q) and N-terminal acetylation were set as variable modifications, while carbamidomethylation (C) was set as a fixed modification. Two missed enzymatic cleavages were permitted for the final annotation. Peptides and proteins with a false discovery rate (FDR; q-value) < 1% were considered. The MaxQuant label-free quantification algorithm (MaxLFQ) was applied for global data normalization (minimum ratio count 1), and the MaxQuant protein group list was further analyzed with the KNIME Analytics Platform (v3.7.1). Results were deposited in the PRIDE Archive ( https://www.ebi.ac.uk/pride/archive , accession number PXD038393).

Construction of SCA1 PPI networks and enrichment analysis

Proteins were annotated in the HIPPIE database for the retrieval of high-confidence protein–protein interaction (PPI) scores (≥ 0.71). A PPI network was constructed in Cytoscape, and proteins were further clustered into functional communities using the GLay algorithm. The layout of the network was designed in the Gephi platform. For the prediction of upstream regulatory kinases, proteins were annotated in the X2K Appyters platform using the KEA3 database, and predicted kinases were filtered at an overall score threshold of 75. Over-representation analyses for KEGG biological pathways, human diseases (Jensen Diseases), proteomics signatures (ProteomicsDB) and cell types and tissues (Descartes) were performed with the EnrichR package.

Drug repurposing and repositioning

Protein lists were uploaded to the L1000FWD platform according to their pattern of dysregulation, and hits were sorted by their combined score. Mechanisms of action and drug targets were studied on the PHAROS, DrugBank and Reactome platforms. Evidence for drug safety and usage was obtained from the International Clinical Trials Registry Platform.
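Bridging these Methods to the Results below, a hedged R sketch of the SCA1 network assembly and community detection might look as follows. The proteomics table `de` (with columns `gene`, `log2FC`, `adj_p`) and the high-confidence edge table `hippie_edges` are assumed inputs, and igraph’s cluster_louvain() is used only as an illustrative stand-in for the GLay algorithm, which runs inside Cytoscape.

```r
library(igraph)

# 805 dysregulated proteins by |log2 FC| >= 0.5 and adjusted p <= 0.05
hits  <- subset(de, abs(log2FC) >= 0.5 & adj_p <= 0.05)$gene
edges <- subset(hippie_edges, geneA %in% hits & geneB %in% hits)

sca1_g <- simplify(graph_from_data_frame(edges[, c("geneA", "geneB")],
                                         directed = FALSE))

# Community detection; Louvain stands in for GLay here
comm <- cluster_louvain(sca1_g)
sizes(comm)                              # community sizes (C1 is the largest)
V(sca1_g)$community <- membership(comm)  # store the community of each protein
```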
Prediction of phosphorylation and dephosphorylation directed PPIs

To predict PTM-related directed PPIs (PTM-PPIs), we considered the entire dataset of human PPIs mapped in hyperbolic space (hPIN; see Methods for details). In this space, the angular coordinate (theta) represents the similarity between nodes in terms of interacting partners, and a shorter distance to the center (r) corresponds to nodes with higher connectivity. The angular coordinate of the nodes in the hyperbolic plane reflects characteristics that make a node similar to the others; from a biological point of view, proteins agglomerate in the angular dimension of H2 in a way that captures functional organization. To investigate the biological meaning of the theta coordinates, we grouped proteins into clusters by identifying gaps between consecutive inferred angles (see Methods for details). This resulted in 24 clusters in the hPIN. The proteins are grouped in a similarity-based manner, as each cluster is enriched in distinct aspects of the GO Biological Process ontology.

To predict PPIs as directed PTM-PPIs, we collected a dataset of 295 experimentally supported interactions involving protein phosphorylation or dephosphorylation. We selected PPIs for which one of the interacting proteins is a putative effector (kinase or phosphatase) while the other is not (see Methods for details). We assume that this gives us a good estimate of the directionality of the interaction. Interestingly, the distribution of nodes corresponding to effectors and targets differs from that of the background proteome in the hPIN. For example, effectors and targets appear to be depleted (less frequent than background) in clusters 1.13 and 1.14, associated with mitochondrial functions, while targets are enriched in cluster 1.1, associated with mRNA processing, and effectors in cluster 1.7, associated with protein ubiquitination. These results suggest that the hPIN provides information that could be used to discriminate effectors and targets of protein phosphorylation or dephosphorylation.

We chose 14 features to train a random forest (RF) model: six assigned to each of the two interacting nodes (the r and theta coordinates in the hyperbolic map and four measures of centrality), and two regarding the edge (the hyperbolic distance and the radial difference between the nodes) (see Methods for details). We could appreciate significant differences in the distributions of these features for effectors, targets and background. Regarding the hyperbolic coordinates, the effectors of the 295 positive directed PPIs had, in general, a shorter radius than the background proteins. This was also the case for targets, although with a less pronounced difference. This indicates that targets and effectors are more interconnected than other proteins, which agrees with their contribution to signaling pathways, stronger for the effectors. Regarding the angular dimension, both effectors and targets have maxima at positions different from the background, with effectors showing a more pronounced grouping around theta = 1.8, which corresponds to cluster 1.7. Regarding the centrality measures, targets and effectors have higher values than the background. For the distributions with maxima at zero values (EC, BC and DC), targets seem to have a larger number of low values than effectors; for the Gaussian-like distribution of CC, effectors again have slightly higher values than targets.
Together with the observation of shorter radii, this is in accordance with the higher connectivity of effectors and targets, with effectors slightly more connected and central than targets. Regarding the edges, the radial differences between the connected nodes are larger in PTM-PPIs than in the background, suggesting that these interactions have a greater capacity to connect highly and lowly connected regions of the hPIN. The hyperbolic distances between nodes are slightly lower than for background PPIs, which could reflect that these PTM-PPIs participate in closely connected pathways and signaling networks.

To train the RF, we used 70% of the 295 interactions with 5-fold cross-validation, while the remaining 30% were used as the validation set. We performed the cross-validation independently 10 times. The model with the highest accuracy was chosen as the final prediction model and validated on the test set, showing an accuracy of 74% (see Methods for details). A random forest (RF) classifier with 500 trees produced satisfactory results: 74% sensitivity and 80% specificity were calculated from the confusion matrix. The receiver operating characteristic (ROC) curve had an AUC value of 0.87, indicating that the classifier could effectively find directed PTM-related PPIs based on the topological and network properties of the interacting partners in the hPIN. Additionally, the precision-recall curve was computed to assess the model’s performance in the context of the imbalanced dataset, providing further insight into the classifier’s ability to correctly identify the minority class (directed PTM-related PPIs).

Regarding the contribution of the features to the predictions, we observed that the angular coordinate of the target is the most important feature, closely followed by the angular coordinate of the effector. The high relevance of the angular coordinates of effectors and targets is related to the fact that the angular positioning of the hyperbolic mapping captures the functional organization of proteins in the cell. The EC of the effector and of the target (which represents the importance of a node based on its links to important nodes) were the next features in order of importance, followed by the hyperbolic distance between the pairs of interacting proteins. The next feature in order of importance was the CC of the target (which seemed to be much more important than that of the effector); this feature measures the central position of a node with respect to the entire network. Next came the BC of the effector and of the target (representing how often a node lies on paths between other nodes), then the r difference, the CC of the effector and, with much less importance, the r of the effector and of the target, suggesting that the distance to the center (which reflects the evolutionary age of the protein) is not very informative. Finally, the least important features are the DC values of the target and of the effector, which represent how well a node is directly connected to the other nodes in the network. It is interesting that the values of theta and, more marginally, the hyperbolic distance and the r difference between the nodes of PPIs were relevant features for the prediction. These results indicate that the embedding of the hPIN in hyperbolic space, which assigns these r and theta values to each node, can be useful to identify PTM-directed protein interactions.
In particular, the contrast between the distributions of the theta values of effectors and targets in the training dataset and the functions enriched in the corresponding clusters is revealing. While the maximum accumulation of effectors happens in a region of cluster 1.7 associated with the GO term “protein ubiquitination”, a wider maximum occurs for both effectors and targets around clusters 1.1–1.3, enriched in the terms “mRNA processing”, “regulation of gene silencing by miRNA” and “regulation of nucleobase-containing compound metabolic process”. The latter includes the synthesis of DNA and RNA. The distribution of targets is more similar to the background than that of the effectors, suggesting that PTM regulation targets all cell processes. Effectors, in contrast, tend to occupy tighter angular regions of the map, suggesting an association with regulatory mechanisms of control. The association with protein degradation (ubiquitination) seems to be a salient feature, which agrees with mechanisms known to stop active kinases. These distributions of theta values have biological significance, which explains why they had the best predictive value. Regarding the centrality measures, which are independent of the hyperbolic mapping, EC, particularly of the effectors, plays an important role, whereas DC does not seem to contribute as much. In any case, all features receive non-null values, suggesting that they all have predictive value.

To better understand how important the features are for the prediction of (de-)phosphorylation directed PPIs, we trained five RF models, each time masking different features, and evaluated the overall contribution of the attributes in terms of information gain. More specifically, we built the model on the 14 features and determined the significance of each variable in the predictions using the varImp function of the random forest classifier. This method tracks the changes in model statistics for each predictor and accumulates the reduction in the statistic when each predictor’s feature is added to the model; this total reduction is used as the variable importance measure. We then conducted the prediction process while masking different features and observed the changes in the performance metrics of the model. We created five datasets, starting with all 14 variables and then removing features one by one in order of their importance. ROC curve analysis of the different datasets showed that the AUC was highest when all 14 features were used and decreased as features were removed. This finding indicates the importance of the hyperbolic properties and the centrality measures in classifying directed (de-)phosphorylation-related PPIs.
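A hedged sketch of this ablation, reusing `train_set`, `test_set` and `ctrl` from the training sketch in the Methods, is shown below; the text does not state whether the most or least important features were removed first, so here the most important ones are masked first.

```r
library(caret)
library(pROC)

# Rank features by importance from the full 14-feature model
imp <- varImp(rf_fit)$importance
by_importance <- rownames(imp)[order(imp$Overall, decreasing = TRUE)]

# Five nested feature sets: 14, 13, 12, 11 and 10 features
auc_per_set <- sapply(0:4, function(k) {
  keep <- setdiff(by_importance, head(by_importance, k))  # mask the top-k features
  fit  <- train(x = train_set[, keep, drop = FALSE], y = train_set$class,
                method = "rf", trControl = ctrl,
                tuneGrid = data.frame(mtry = length(keep)), ntree = 500)
  p <- predict(fit, test_set[, keep, drop = FALSE], type = "prob")
  as.numeric(auc(roc(test_set$class, p$ptm)))
})
names(auc_per_set) <- paste0(14:10, "_features")
auc_per_set
```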
The prediction model was applied to the entire set of edges. As we predict directional PTM-PPIs (with an effector and a target), all edges were tested in both directions, for a total of n = 2 × 186,196 = 372,392 evaluations. The model produces the probability of an interaction being a directed PTM-PPI or not. A total of 117,655 directed interactions received a score ≥ 0.5 and 6,790 a score ≥ 0.9. The table of predictions makes it easy to find the best predictions as target or effector for every protein in the network. For example, MAPK3 (MAP kinase-activated protein kinase 3; UniProt ID MAPK3_HUMAN) has a total of 9 edges, none of which were part of the experimentally verified set used for training. Regardless, the predictions make sense: 8/9 have a score ≥ 0.5 for MAPK3 as effector (the best one is for HSPB1 as target; score = 0.942), and there is only one with a score ≥ 0.5 for MAPK3 as target (with PRKY; PRKY_HUMAN), which is precisely the one with a score < 0.5 for MAPK3 as effector. PRKY is a putative serine/threonine protein kinase with very little experimental information, and its prediction as an effector acting on MAPK3 is modest (score = 0.59), but the fact that it is annotated as a protein kinase makes the prediction plausible. We see that the classifier has trouble assigning the correct direction of an interaction. For example, the edge between CHK2 (CHK2_HUMAN) and RB (retinoblastoma; RB_HUMAN), which was used as a positive for training, is highly scored with RB as target (score = 0.968), but also with RB as effector (score = 0.78). This indicates that the predictions need to be taken with care, but it also suggests that the scores can be compared. To evaluate whether our predictions are collectively meaningful from a biological point of view, we performed a Gene Ontology (GO) enrichment analysis of the proteins predicted at least once as effectors (n = 12,115). Even considering that this is a large number of proteins, they are enriched in GO Biological Process terms such as “ubiquitin-dependent protein catabolic process”, “protein ubiquitination”, “protein phosphorylation” and “cellular protein modification process”. Additionally, GO Molecular Function terms like “protein serine/threonine kinase activity”, “protein kinase binding” and “kinase binding” are also enriched. For comparison, we computed the enrichment for proteins predicted at least once as not being effectors (n = 13,784; the two lists overlap in 10,314 proteins), and most of these terms received less significant p-values. This functional analysis supports the good performance of our prediction model.
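The genome-wide scoring step described above can be sketched as follows, reusing `hpin`, `edge_features` and `rf_fit` from the earlier sketches; the feature-column names must match those used at training time, and the class-probability column ("ptm") is the illustrative label introduced earlier.

```r
# Score every hPIN edge in both directions with the trained model
el <- as_edgelist(hpin)   # two-column matrix of protein identifiers

directed <- rbind(
  data.frame(effector = el[, 1], target = el[, 2]),
  data.frame(effector = el[, 2], target = el[, 1])
)

# Feature matrix; columns carry the names produced by edge_features()
X <- as.data.frame(t(mapply(edge_features, directed$effector, directed$target)))
directed$score <- predict(rf_fit, X, type = "prob")$ptm

sum(directed$score >= 0.5)                 # cf. the 117,655 reported above
head(directed[order(-directed$score), ])   # top-scoring directed PTM-PPIs
```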
Support of the predictions by other methods

To add support to the predictions of our model, we verified which of them were also detected by two alternative approaches: PhosD and Phosformer-ST (see Methods for details). PhosD is a kinase–substrate prediction tool based on protein domains. Phosformer-ST is a machine learning tool that scores serine/threonine phosphorylation sites comprising 15-mer peptides. A total of 788 and 535 of our predictions were supported by PhosD and Phosformer-ST, respectively, with an overlap of all three methods for 24 predictions. The detailed prediction results are provided as supporting data.

Proteomics analysis highlights dysregulated biological pathways in a SCA1 cellular model

The results presented above suggest that the hyperbolic mapping of the hPIN has predictive value for directed PTM-PPIs. However, the interpretation of individual prediction scores remains complex. We therefore hypothesized that these predictive insights can collectively help us understand perturbations of the hPIN, which could be particularly valuable in identifying therapeutic mechanisms for human diseases. This is especially relevant for complex neurodegenerative diseases, in which the normal interactome is reportedly disrupted and abnormal PTMs can promote a plethora of pathological events. To test this hypothesis, we analyzed proteomics data from a SCA1 cellular model, in which the hPIN is perturbed by the accumulation of inclusions of polyQ-expanded ataxin-1. In particular, proteome alterations driven by the accumulation of mutant ataxin-1 were studied in Tet-On YFP-ATXN1(Q82) MSCs, a previously characterized cellular model of protein aggregation. These cells contain insoluble intranuclear inclusions of polyQ-expanded ataxin-1 with a β-sheet conformation, an event that characterizes late-stage SCA1. Global proteome profiling was performed in inclusion-containing cells (SCA1, n = 3) and control cells (CTL, n = 3; see Methods for details). As a result, 3,926 proteins were identified and 3,179 of them were quantified in all six samples. The two conditions were efficiently discriminated by principal component analysis, using as a criterion the variance in protein representation in each group. To create a representative protein network for SCA1 cellular pathology, we filtered 805 dysregulated proteins by |log2 FC| ≥ 0.5 and adjusted p-value ≤ 0.05, and retrieved their high-confidence interaction scores using the HIPPIE platform. As a result, a complex PPI network of 636 significantly dysregulated proteins was generated, representing proteome alterations due to the accumulation of polyQ-expanded ataxin-1 inclusions. The PPI network was further divided into smaller communities of densely interacting proteins, which outline functional modules (see Methods for details). Ataxin-1 was detected in the largest community (C1), which was highly associated with neurodegeneration and neuronal-related terms. Enrichment analysis for biological pathways on the next four largest communities revealed a significant implication of the spliceosome, lysosome and ribosome, as well as metabolic pathways (C2–C5, respectively). Interestingly, clustering and analysis of a control PPI network generated from a randomly selected protein dataset (n = 805) did not result in similar enrichment terms, indicating that the SCA1 PPI network and its sub-communities are not generic but strongly associated with polyQ aggregation.

PTM-PPIs of ataxin-1 are components of the SCA1 network

To date, the effect of PTM-PPIs on the aggregation of mutant ataxin-1 remains unknown. We therefore sought to identify potential PTM-PPIs of ataxin-1 that may be involved in polyQ protein aggregation and, eventually, SCA1 pathogenesis. In the SCA1 PPI network, ataxin-1 directly interacted with 21 proteins, and implementation of our algorithm suggested that 13 of them may have a post-translational modification activity. Specifically, four of these proteins (gene names: ANP32A, EIF3F, GSPT1 and USP7) were downregulated, while nine (gene names: DNAJB6, HSPB1, PHPT1, SNCA, SQSTM1, SUMO1, TBL1XR1, TRIP6 and TPM3) were upregulated in SCA1 cells. The enzymatic activity of phosphohistidine phosphatase 1 (PHPT1), small ubiquitin-related modifier 1 (SUMO1), ubiquitin-specific-processing protease 7 (USP7) and the chaperone proteins (SNCA, HSPB1 and DNAJB6) has been previously described, while no such information exists for the rest (7/13) of the predicted proteins. The PTM-PPIs of ataxin-1 clustered mainly in community C1 (associated with neurodegeneration; eight proteins) and, to a lesser extent, in C2 (spliceosome; four proteins) and C4 (lysosome; one protein). In an attempt to find a link among the three major communities containing the PTM-PPIs of ataxin-1 (C1, C2 and C4), we searched for potential common upstream regulators.
To do so, the proteins of each cluster were considered substrates and annotated using the KEA3 database for the prediction of regulatory kinases (see Methods for details). According to the results, 21 kinases were identified as potential common regulators of all three communities. Interestingly, three of them (MAPK1, MAPK3 and CDK4) were indeed significantly dysregulated in SCA1 cells, suggesting their potential impact on the regulation of the C1, C2 and C4 communities. These kinases interact with various components of clusters C1 and C2 but not with proteins of C4, while none of them directly interacts with ataxin-1. Remarkably, no significantly dysregulated kinases were identified when the analysis was repeated for a randomly sampled test PPI network, underscoring the specificity of the identified kinases to the SCA1-related network.

Identification and restoration of critical components of the SCA1 PPI network

SCA1-related cellular pathology might be driven by a few specific components scattered within the disease PPI network. To address this hypothesis, we generated a sub-network consisting of ataxin-1, its predicted PTM-PPIs (n = 13) and their three common upstream kinases (MAPK1, MAPK3 and CDK4) (see Methods for details). Interestingly, all three kinases were connected to ataxin-1 through α-synuclein (SNCA), a protein associated with several neurodegenerative diseases and particularly Parkinson’s disease. Enrichment analysis for rare diseases (see Methods for details) indicated that this sub-network is associated with cerebellar degeneration terms, including spinocerebellar ataxia and the formation of nuclear inclusion bodies. This result suggests that these proteins are critical for the disease and that their dysregulation may underlie SCA1-related pathological events. Therefore, restoring their dysregulation pattern might mitigate disease progression. To this end, we searched for candidate drugs with the potential to increase the levels of the downregulated proteins and decrease those of the upregulated ones. The reverse score indicated the overlap between the input proteins and the signature altered after drug administration. Hits with at least a 25% reverse score were considered significant candidates (see Methods for details) and were then sorted by descending combined score, which takes the reverse score, p-value and Z-score into account. From this analysis, we identified four known drugs (artesunate, linifanib, budesonide and betamethasone) and three novel compounds (BRD-K54687541, BRD-K71265179 and BRD-A08662020) as potential treatment approaches. These agents might mitigate polyQ-expanded ataxin-1-associated neuropathology in SCA1 cells, potentially leading to the development of novel therapeutic strategies against the disease.
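To close the loop with the network objects from the earlier sketches, the ataxin-1 sub-network described here can be extracted along these lines; `sca1_g` is the assumed SCA1 graph from the Methods sketch, with vertices named by gene symbol.

```r
library(igraph)

# Ataxin-1, its 13 predicted PTM-PPIs and the three shared upstream kinases
focus <- c("ATXN1",
           "ANP32A", "EIF3F", "GSPT1", "USP7",          # downregulated PTM-PPIs
           "DNAJB6", "HSPB1", "PHPT1", "SNCA", "SQSTM1",
           "SUMO1", "TBL1XR1", "TRIP6", "TPM3",         # upregulated PTM-PPIs
           "MAPK1", "MAPK3", "CDK4")                    # common upstream kinases

sub_g <- induced_subgraph(sca1_g, vids = intersect(focus, V(sca1_g)$name))

# The kinases reach ataxin-1 through SNCA in this sub-network
shortest_paths(sub_g, from = "MAPK1", to = "ATXN1")$vpath
```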
To predict PTM-related directed PPIs (PTM-PPIs) we considered the entire dataset of human PPIs mapped in hyperbolic space (hPIN; see Methods for details). In this space, the angular coordinates (theta) represent the similarity between the nodes in terms of interacting partners, and shorter distance to the center (r) corresponds to nodes with higher connectivity. The angular coordinate of the nodes in the hyperbolic plane reflects characteristics that make a node similar to the others. From a biological point of view, proteins agglomerate in the angular dimension of the H 2 capturing functional organization . To investigate the biological meaning of the theta coordinates, we find proteins grouped in clusters by identifying gaps between consecutive inferred angles ( , see Methods for details). This resulted in 24 clusters in the hPIN. The proteins are grouped in a similarity-based manner as each cluster is found to be enriched with various aspects of the GO biological process . To predict PPIs as directed PTM-PPIs we collected a dataset of 295 experimentally supported interactions involving protein phosphorylation or dephosphorylation. We selected PPIs for which one of the interacting proteins is a putative effector (kinase or phosphatase) while the other is not (see Methods for details). We assume that this gives us a good estimate of the directionality of the interaction. Interestingly, the distribution of nodes corresponding to effectors and targets is different from that of the background proteome in the hPIN . For example, effectors and targets appear to be depleted (less frequent than background) in clusters 1.13 and 1.14 associated with mitochondrial functions, while targets are enriched in cluster 1.1 associated with mRNA processing and effectors in cluster 1.7 associated with protein ubiquitination . These results suggest that the hPIN provides information that could be used to discriminate effectors and targets of protein phosphorylation or dephosphorylation. We chose 14 features to train a random forest (RF) model, six of them assigned to each of the two interacting nodes (r and theta coordinates in the hyperbolic map and four measurements of centrality), and two regarding the edge (hyperbolic distance and radial difference between the nodes) (See Methods for details). Regarding these features, we could appreciate significant differences in their distributions for effectors, targets and background . Regarding hyperbolic coordinates, the effectors of the 295 positive directed-PPIs had in general shorter radius than the background proteins. This was also the case for targets, although with a less pronounced difference. This indicates that targets and effectors are more interconnected than other proteins, which agrees with their contribution to signaling pathways, stronger for the effectors. Regarding the angular dimension, both effectors and targets have maxima at a position different to the background, with effectors having a more pronounced grouping around theta = 1.8, which corresponds to cluster 1.7 (see also circular plot in ). Regarding centrality measures, targets and effectors have higher values than background. For the distributions with maxima at zero values of EC, BC and DC, targets seem to have a larger number of low values than effectors. For the Gaussian distribution of CC, again effectors have slightly higher values than targets. 
Together with the observations of shorter radius this is in accordance to the higher connectivity of effectors and targets, with effectors slightly more connected and central than targets. Regarding the edges, the radius differences between the connected nodes are higher in PTM-PPIs than background. This suggests that these interactions have a greater capacity to connect highly and lowly connected regions of the hPIN. The hyperbolic distances between nodes are slightly lower than for background PPIs. This could reflect that these PTM-PPIs participate in closely connected pathways and signaling networks. To train the RF, we used 70% of the 295 interactions with 5-fold cross-validation, while the remaining 30% were used as the validation set. We performed the cross validation independently 10 times. The model with the highest accuracy was chosen as the final prediction model and it was validated on the test set, showing an accuracy of 74% ( ; see Methods for details). A random forest (RF) classifier with 500 trees was able to produce satisfactory results. Finally, 74% sensitivity and 80% specificity were calculated from the confusion matrix. The receiver operating characteristic curve (ROC) had an AUC value of 0.87, indicating that the classifier could effectively find directed PTM-related PPIs based on topological and network properties of the interacting partners in the hPIN . Additionally, the Precision-Recall curve was computed to assess the model’s performance in the context of the imbalanced dataset, providing further insights into the classifier’s ability to correctly identify the minority class (directed PTM-related PPIs). Regarding the contribution of the features to the predictions, we observed that the angular coordinate of the target is the most important feature, closely followed by the angular coordinate of the effector . The high relevance of the angular coordinates of effectors and targets is related to the fact that the angular positioning of the hyperbolic mapping captures the functional organization of the proteins in the cell. EC of effector and target (which represents the importance of a node based on the links to important nodes) were the next features in order of importance. These were followed by the hyperbolic distance between the pairs of interacting proteins. The next feature in order of importance was the CC of the target (which seemed to be much more important than that of the effector). This feature measures the central position of the node with respect to the entire network. The next features were the BC of the effector and of the target (representing how often a node is on paths between other nodes). We then have the r difference, the CC of the effector and, with much less importance, the r of the effector and of the target, suggesting that the distance to the center (which reflects the evolutionary age of the protein) is not very informative. Finally, the least important features are the DC values of the target and of the effector, which represent how well a node is directly connected to most nodes in the network. It is interesting that the values of theta, and more marginally, the hyperbolic distance and the r distance between the nodes of PPIs were relevant features for the prediction. These results indicate that the embedding of the hPIN in hyperbolic space, which assigns these r and theta values to each node, can be useful to identify PTM-directed protein interactions. 
In particular, contrasting the distributions of the theta values of effectors and targets in the training dataset with the functions enriched in the corresponding clusters is revealing (see ). While the maximum accumulation of effectors occurs in a region of cluster 1.7 associated with the GO term "protein ubiquitination", a wider maximum occurs for both effectors and targets around clusters 1.1–1.3, enriched with the terms "mRNA processing", "regulation of gene silencing by miRNA" and "regulation of nucleobase-containing compound metabolic process". The latter includes the synthesis of DNA and RNA. The distribution of targets is more similar to the background than that of the effectors, suggesting that PTM regulation targets all cell processes. In contrast, effectors tend to occupy tighter angular regions of the map, suggesting an association with mechanisms of regulatory control. The association with protein degradation (ubiquitination) seems to be a salient feature, which agrees with known mechanisms that shut down active kinases . These distributions of theta values have biological significance, which explains why they had the best predictive value. Regarding the centrality measures, which are independent of the hyperbolic mapping, it can be seen that while EC, particularly of the effectors, plays an important role, DC does not seem to contribute much. In any case, all features receive non-zero importance values, suggesting that they all have predictive value. To better understand how important the features are for the prediction of (de)phosphorylation-directed PPIs, we trained five RF models, masking different features each time, and evaluated the overall contribution of the attributes in terms of information gain. More specifically, we built the model on the 14 features and determined the significance of each variable in the predictions using the varImp function of the random forest classifier. This method tracks the changes in model statistics for each predictor and accumulates the reduction in the statistic when each predictor's feature is added to the model; this total reduction is used as the variable importance measure. We then repeated the prediction process while masking different features and observed the changes in the performance metrics of the model. We created five datasets, starting with the 14 variables and removing them one by one in order of their importance . ROC curve analysis of the different datasets showed that the AUC is highest when all 14 features are used and decreases as features are removed . This finding indicates the importance of the hyperbolic properties and centrality measures in classifying directed (de)phosphorylation-related PPIs. The prediction model was then applied to the entire set of edges. As we predict directional PTM-PPIs (with an effector and a target), all edges were tested in both directions, for a total of n = 2 × 186,198 = 372,396 evaluations. The model produces a probability of the interaction being a directed PTM-PPI or not. A total of 117,655 directed interactions received a score ≥ 0.5 and 6,790 a score ≥ 0.9 . The table of predictions makes it easy to find the best predictions as target or effector for every protein in the network. For example, MAPK3 (mitogen-activated protein kinase 3; UniProt ID MAPK3_HUMAN) has a total of 9 edges; none of them were part of the experimentally verified set used in the training.
Regardless, the predictions make sense: 8/9 have a score ≥ 0.5 for MAPK3 as effector (the best one is for HSPB1 as a target; score = 0.942), and there is only one with a score ≥ 0.5 for MAPK3 as target (with PRKY; PRKY_HUMAN), which is precisely the one with a score < 0.5 for MAPK3 as effector. PRKY is a putative serine/threonine protein kinase with very little experimental information, and its prediction as an effector over MAPK3 is modest (score = 0.59), but the fact that it is annotated as a protein kinase makes the prediction plausible. We see that the classifier can have trouble assigning the correct direction of an interaction. For example, the edge between CHK2 (CHK2_HUMAN) and RB (retinoblastoma; RB_HUMAN), which was used as a positive for training, is scored highly with RB as target (score = 0.968), but also with RB as effector (score = 0.78). This indicates that the predictions need to be taken with care, but also suggests that the scores can be compared. To evaluate whether our predictions are collectively meaningful from a biological point of view, we performed a Gene Ontology (GO) enrichment analysis of proteins predicted at least once as effectors (n = 12,115, ). Even considering that this is a large number of proteins, they are enriched in GO Biological Process terms such as "ubiquitin-dependent protein catabolic process", "protein ubiquitination", "protein phosphorylation" and "cellular protein modifications". Additionally, GO Molecular Function terms like "protein serine/threonine kinase activity", "protein kinase binding" and "kinase binding" are also enriched. For comparison, we computed the enrichment for proteins predicted at least once as not being effectors (n = 13,784; the two lists overlap in 10,314 proteins), and most of these terms received less significant p-values. This functional analysis supports the good performance of our prediction model.
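The computation behind such an enrichment analysis is typically a one-sided hypergeometric (Fisher) test per GO term. A minimal sketch follows; the counts are purely illustrative and are not taken from the paper.

from scipy.stats import hypergeom

def go_term_pvalue(k, n_study, K, N):
    # One-sided p-value P(X >= k) for a GO term annotated to K of the N
    # background proteins, when k of the n_study study proteins carry it.
    return hypergeom.sf(k - 1, N, K, n_study)

# Illustrative numbers only (not taken from the paper)
p = go_term_pvalue(k=480, n_study=12115, K=600, N=19000)
print(p)

In practice, the per-term p-values would then be corrected for multiple testing, for example with the Benjamini-Hochberg procedure.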
To add support to the predictions of our model, we verified which of them were also detected by two alternative approaches: PhosD and Phosformer-ST (see Methods for details). PhosD is a kinase-substrate prediction tool based on protein domains. Phosformer-ST is a machine learning tool that evaluates serine/threonine phosphorylation sites represented as 15-mer peptides, assigning scores to these regions. A total of 788 and 535 of our predictions were supported by PhosD and Phosformer-ST, respectively, with all three methods agreeing on 24 predictions. The detailed prediction results can be found in .
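Cross-method agreement of this kind reduces to intersecting sets of directed (effector, target) pairs. A small sketch with invented identifiers:

# Hypothetical directed predictions; pairs are (effector, target) UniProtKB IDs
ours = {("P1", "Q1"), ("P2", "Q2"), ("P3", "Q3")}
phosd = {("P1", "Q1"), ("P4", "Q4")}
phosformer = {("P1", "Q1")}

print(len(ours & phosd))               # predictions shared with PhosD
print(len(ours & phosformer))          # predictions shared with Phosformer-ST
print(len(ours & phosd & phosformer))  # three-way agreement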
The results presented above suggest that the hyperbolic mapping of the hPIN provides predictive value for directed PTM-PPIs. However, the interpretation of individual prediction scores remains complex. Therefore, we hypothesized that these predictive insights can collectively help us understand perturbations of the hPIN, which could be particularly valuable in identifying therapeutic mechanisms for human diseases. This is especially relevant for complex neurodegenerative diseases, in which the normal interactome is reportedly disrupted and abnormal PTMs can promote a plethora of pathological events. To test this hypothesis, we analyzed proteomics data from a spinocerebellar ataxia type 1 (SCA1) cellular model, in which the hPIN is perturbed by the accumulation of inclusions of polyQ-expanded ataxin-1. In particular, proteome alterations driven by the accumulation of mutant ataxin-1 were studied in Tet-On YFP-ATXN1(Q82) MSCs, a previously characterized cellular model of protein aggregation . These cells contain insoluble intranuclear inclusions of polyQ-expanded ataxin-1 with a β-sheet conformation, an event that characterizes late-stage SCA1. Global proteome profiling was performed in inclusion-containing cells (SCA1, n = 3) and control cells (CTL, n = 3; see Methods for details). As a result, 3,926 proteins were identified and 3,179 of them were quantified in all six samples. The two conditions were efficiently discriminated by principal component analysis, using as a criterion the variance in protein representation in each group . To create a representative protein network for SCA1 cellular pathology, we filtered 805 dysregulated proteins by |log2FC| ≥ 0.5 and adj. p-value ≤ 0.05 and retrieved their high-confidence interaction scores using the HIPPIE platform . As a result, a complex PPI network of 636 significantly dysregulated proteins was generated, representing proteome alterations due to the accumulation of polyQ-expanded ataxin-1 inclusions . The PPI network was further divided into smaller communities of densely interacting proteins, which outline functional modules (see Methods for details). Ataxin-1 was detected in the largest community (C1), which was highly associated with neurodegeneration and neuronal-related terms . Enrichment analysis for biological pathways on the next four largest communities revealed significant involvement of the spliceosome, ribosome and lysosome, as well as metabolic pathways (C2–C5, respectively) . Interestingly, clustering and analysis of a control PPI network generated from a randomly selected protein dataset (n = 805) did not result in similar enrichment terms, indicating that the SCA1 PPI network and its sub-communities are not generic but strongly associated with polyQ aggregation.
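A schematic of this workflow (significance filtering followed by community detection on the induced network) is shown below. The protein names, fold changes and edges are toy values, and modularity-based clustering is used only as a plausible stand-in for the community detection method detailed in the paper's Methods.

import pandas as pd
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy proteomics table; the real values come from the quantified proteome
proteome = pd.DataFrame({
    "protein":  ["ATXN1", "USP7", "SNCA", "HSPB1", "GSPT1"],
    "log2FC":   [1.9, -0.8, 0.7, 1.1, -0.6],
    "adj_pval": [0.001, 0.010, 0.020, 0.004, 0.030],
})

# Significance filter used in the text: |log2FC| >= 0.5 and adj. p <= 0.05
dysregulated = proteome[(proteome["log2FC"].abs() >= 0.5)
                        & (proteome["adj_pval"] <= 0.05)]

# Toy high-confidence interactions (retrieved from HIPPIE in the study)
G = nx.Graph([("ATXN1", "USP7"), ("USP7", "SNCA"),
              ("SNCA", "HSPB1"), ("ATXN1", "HSPB1")])
G = G.subgraph(dysregulated["protein"])

# Modularity-based communities as a stand-in for the paper's clustering
for i, members in enumerate(
        sorted(greedy_modularity_communities(G), key=len, reverse=True), 1):
    print(f"C{i}:", sorted(members))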
To date, the effect of PTM-PPIs on the aggregation of mutant ataxin-1 remains unknown. Therefore, we sought to identify potential PTM-PPIs of ataxin-1 that may be involved in polyQ protein aggregation and eventually in SCA1 pathogenesis. In the SCA1 PPI network, ataxin-1 directly interacted with 21 proteins. Implementation of our algorithm suggested that 13 of them may have post-translational modification activity. Specifically, four of these proteins (gene names: ANP32A, EIF3F, GSPT1 and USP7) were downregulated, while nine (gene names: DNAJB6, HSPB1, PHPT1, SNCA, SQSTM1, SUMO1, TBL1XR1, TRIP6 and TPM3) were upregulated in SCA1 cells . The enzymatic activity of phosphohistidine phosphatase 1 (PHPT1), small ubiquitin-related modifier 1 (SUMO1), ubiquitin-specific-processing protease 7 (USP7) and the chaperone proteins (SNCA, HSPB1 and DNAJB6) has been previously described, while no such information exists for the rest (7/13) of the predicted proteins . The PTM-PPIs of ataxin-1 mainly clustered in community C1 (associated with neurodegeneration; eight proteins) and, to a lesser extent, in C2 (associated with the spliceosome; four proteins) and C4 (associated with the lysosome; one protein) . In an attempt to find a link among the three major communities containing the PTM-PPIs of ataxin-1 (C1, C2 and C4), we searched for potential common upstream regulators. To do so, the proteins of each cluster were considered substrates and were annotated using the KEA3 database for the prediction of regulatory kinases (see Methods for details). According to the results, 21 kinases were identified as potential common regulators of all three communities. Interestingly, three of them (MAPK1, MAPK3 and CDK4) were indeed significantly dysregulated in SCA1 cells , suggesting their potential impact on the regulation of the C1, C2 and C4 communities. These kinases interact with various components of clusters C1 and C2 but not with proteins of C4, and none of them directly interacts with ataxin-1 . Remarkably, no significantly dysregulated kinases were identified when repeating the analysis for a randomly sampled test PPI network, underscoring the specificity of the identified kinases to the SCA1-related network.
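Conceptually, finding common upstream regulators amounts to keeping every kinase that has at least one annotated substrate in each community. The sketch below uses invented substrate lists in place of real KEA3 output.

# Invented kinase -> substrate annotations standing in for KEA3 output
kinase_substrates = {
    "MAPK1": {"HSPB1", "SNRNP70", "LAMP1"},
    "MAPK3": {"SQSTM1", "SNRNP70", "LAMP1"},
    "CDK4":  {"TBL1XR1", "SNRNP70", "LAMP1"},
    "GSK3B": {"HSPB1"},
}
communities = {
    "C1": {"HSPB1", "SQSTM1", "TBL1XR1"},  # neurodegeneration
    "C2": {"SNRNP70"},                     # spliceosome
    "C4": {"LAMP1"},                       # lysosome
}

# A common upstream regulator must hit at least one substrate per community
common = [kinase for kinase, subs in kinase_substrates.items()
          if all(subs & members for members in communities.values())]
print(common)  # -> ['MAPK1', 'MAPK3', 'CDK4']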
SCA1-related cellular pathology might be driven by a few specific components scattered within the disease PPI network. To test this hypothesis, we generated a sub-network consisting of ataxin-1, its predicted PTM-PPIs (n = 13) and their three common upstream kinases (MAPK1, MAPK3 and CDK4) ( ; see Methods for details). Interestingly, all three identified kinases were connected to ataxin-1 through α-synuclein (SNCA), a protein associated with several neurodegenerative diseases and particularly Parkinson's disease . Enrichment analysis for rare diseases (see Methods for details) indicated that this sub-network is associated with cerebellar degeneration terms, including spinocerebellar ataxia and the formation of nuclear inclusion bodies. This result suggests that these proteins are critical for the disease and that their dysregulation may underlie SCA1-related pathological events . Therefore, reversing their dysregulation pattern might mitigate disease progression. To this end, we searched for candidate drugs with the potential to increase the levels of the downregulated proteins and decrease those of the upregulated ones. The reverse score of each drug indicated the overlap between the input proteins and the signature altered after drug administration. Hits with at least a 25% reverse score were considered significant candidates (see Methods for details). They were then sorted by descending combined score, which considers the reverse score, p-value and Z-score. From this analysis, we identified four known drugs (artesunate, linifanib, budesonide and betamethasone) and three novel compounds (BRD-K54687541, BRD-K71265179 and BRD-A08662020) as potential treatment approaches . These agents might mitigate polyQ-expanded ataxin-1-associated neuropathology in SCA1 cells, potentially leading to the development of novel therapeutic strategies against the disease.
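The selection logic can be sketched as follows. The drug signatures are invented, and the combined-score formula shown (reverse score × |Z| × −log10 p) is only one plausible combination, since the exact formula of the repurposing platform is not given here.

import math

# Disease signature: +1 = upregulated in SCA1 cells, -1 = downregulated
signature = {"HSPB1": +1, "SNCA": +1, "SQSTM1": +1, "USP7": -1, "GSPT1": -1}

# Invented drug-induced changes with mock p-values and Z-scores
drugs = {
    "drug_A": {"up": {"USP7", "GSPT1"}, "down": {"HSPB1", "SNCA"},
               "p": 1e-4, "z": 3.1},
    "drug_B": {"up": {"HSPB1"}, "down": {"USP7"}, "p": 0.02, "z": 1.2},
}

def reverse_score(sig, drug):
    # Percentage of signature proteins the drug moves in the opposite direction
    hits = sum((d > 0 and prot in drug["down"]) or (d < 0 and prot in drug["up"])
               for prot, d in sig.items())
    return 100.0 * hits / len(sig)

ranked = []
for name, drug in drugs.items():
    rs = reverse_score(signature, drug)
    if rs >= 25:  # significance threshold used in the text
        combined = rs * abs(drug["z"]) * -math.log10(drug["p"])  # assumed formula
        ranked.append((name, rs, combined))
ranked.sort(key=lambda t: t[2], reverse=True)
print(ranked)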
Machine learning has shown good performance in extracting rules from massive biological data. Here we present a computational method that implements machine learning based on the random forest algorithm and trains a model to predict directed PTM-PPIs, specifically, phosphorylation and dephosphorylation interactions between an effector and a target. Several lines of work approach PPI prediction through various computational methods, but little research has so far addressed predicting the function of PPIs. The representation of the human protein interaction network in the two-dimensional hyperbolic plane has been shown to be both meaningful and useful: inferred node coordinates uncover information about protein evolution and function, whereas hyperbolic distances can be used to identify potential protein interactions . In this study, we report another scenario, in which hyperbolic properties together with metrics from network analysis are used to predict directed PTM-PPIs. The result that the angular (theta) coordinates of targets and effectors were the most predictive features is of particular relevance, considering that they outperform network measures that do not depend on the hyperbolic mapping. The fact that the theta of the target is more predictive than that of the effector is consistent with targets being responsible for narrower functions (signaling, cell cycle control, cell differentiation), while effectors, occupying a more upstream position in regulatory networks, could be expected to have more general functions and, therefore, fewer restrictions on the angular positions they can take in the hyperbolic map as core components of signaling cascades . Our evaluation of the various centrality measures suggests that, while all of them have predictive value, degree centrality may be the least informative compared to eigenvector, closeness and betweenness centrality, at least when evaluating phosphorylation and dephosphorylation. The complete list of predicted effectors turned out to be explainable from a biological point of view according to functional enrichment analysis. To illustrate how to use the collective predictions for studying disease progression, we predicted PTM-PPIs in a SCA1 disease model with intense hPIN perturbation. The cell model employed here recapitulates key pathological features of SCA1, one of several polyQ diseases that are caused by expanded CAG repeats encoding a long polyQ tract in the respective proteins and that lead to neurodegeneration . Two possible factors contributing to selective neuronal impairment are the abnormal subcellular localization of polyQ proteins and changes in their folding and function . PTMs have been shown to regulate properties of these proteins, including their intracellular localization and function . Consequently, understanding the effect of PTMs in polyQ diseases may yield important insight into the mechanisms behind neuronal damage, and more specifically into SCA1. We identified proteins that could be at the center of dysregulated phosphorylation networks and generated a disease-specific PPI network enriched with our PTM-PPI predictions. Among them, we identified α-synuclein (SNCA), a protein that translocates between the cytoplasm and the nucleus. SNCA may also function as a chaperone, as it shares physical and functional homology with the 14-3-3 protein family, which is responsible for ataxin-1 translocation from the cytoplasm to the nucleus .
Interestingly, SNCA is also involved in the pathogenesis of Parkinson's disease and may exert its effect in line with the key upstream kinases of the SCA1 disease network . To evaluate the translational impact of the predicted outcome, we searched for existing drugs that could revert the dysregulated production of these proteins as potential therapeutics against the disease. By implementing a network-based drug repurposing analysis, we identified seven potential drug candidates. Artesunate was the top hit, acting as a protein synthesis inhibitor and a glucocorticoid receptor agonist. We have previously shown that the accumulation of mutant ataxin-1 disrupts ribosome assembly and causes proteome instability . Therefore, regulation of translation may have a therapeutic effect in SCA1 cells. Interestingly, artesunate is currently being evaluated as a therapeutic agent for Friedreich's ataxia (FA), suggesting that it might also be relevant for the treatment of other similar disorders, including SCA1 . Furthermore, the predicted drug candidates betamethasone and budesonide also act as glucocorticoid receptor agonists. Although there is no direct evidence for SCA1, activation of glucocorticoid receptors seems to attenuate the aggregation of polyQ-expanded ataxin-3 and huntingtin proteins in SCA3 and Huntington's disease (HD), respectively . Our work supports the value of the hPIN and of its hyperbolic mapping for predicting the function of directed PTM-PPIs. The method was limited to detecting phosphorylation and dephosphorylation, the most common PTMs, as directed interactions between a regulatory protein and its target; more complex interactions can be expected, since regulatory proteins are often multi-domain proteins with a multiplicity of sites for their own regulation, and our approach cannot be expected to capture those without an appropriate training dataset. In addition, we focused on phosphorylation and dephosphorylation without making a distinction between them and, most importantly, without considering other less frequent but relevant PTMs such as ubiquitination, methylation or acetylation. Even within these limitations, we were able to apply our predictions to provide a proteome-wide set of scored interactions that we used to suggest therapeutic actions against a neurodegenerative disease. Our predictions should find applicability in combination with many other experimental and computational datasets.
S1 Fig Identification of big gaps between inferred protein angles. Proteins were sorted increasingly by their inferred angular coordinates θ, and the difference between consecutive angles θi and θi+1 was computed. The peaks correspond to gap sizes in the angular dimension and hint at the presence of similarity-based clusters. To determine the beginning and end of each cluster in the hPIN, we chose the gap size (g = 0.0077, red line) that produced clusters with a minimum of three components. The same process was followed to subcluster the first sector into 15 smaller clusters using a separate gap size (g = 0.042, blue line). This allowed us to perform meaningful enrichment analysis of each group of proteins. (JPG)

S2 Fig Evaluation of the model. (A) Accuracy for all the models after 5-fold cross-validation repeated 10 times. (B) The ROC curve of the model confirms a satisfactory classification performance. (C) Precision-recall curve, providing additional performance evaluation. (JPG)

S3 Fig ROC curves and AUC scores comparing the classifier performance of the different datasets containing various numbers of features. In total, 14 features related to hyperbolic properties and centrality measures were used to predict phosphorylation- and dephosphorylation-directed PPIs. (JPG)

S4 Fig Profile of dysregulated proteins in SCA1 cells containing polyQ inclusions. (A) SCA1 cells were efficiently discriminated from control cells (CTL) using PCA. (B) Volcano plot depicting 449 significantly downregulated proteins (blue) and 356 significantly upregulated proteins (red) in SCA1 cells [selection criteria: |log2FC| ≥ 0.5, adj. p-value ≤ 0.05]. The top 10 dysregulated proteins are highlighted in the plot. (C) Heatmap based on Euclidean distance indicating two distinct groups of up- and down-regulated proteins (red and blue, respectively) in SCA1 and control cells. (JPG)

S5 Fig Connectivity of dysregulated proteins in SCA1 cells. (A) PPI network of significantly dysregulated proteins in SCA1 cells. Proteins were clustered into dense communities representing functional modules. Enrichment analysis on each cluster indicated a strong association with neurodegeneration (C1), spliceosomal (C2) and lysosomal (C4) activity, ribosome assembly (C3) and metabolic pathways (C5). Ataxin-1 directly interacts with 21 proteins, 13 of which are predicted as PTM-PPIs and participate in C1, C2 and C4. PTM and non-PTM PPIs of ataxin-1 are highlighted in red and yellow, respectively. (B) Identification of regulatory kinases for the C1, C2 and C4 clusters, which contain the PTM-PPIs of ataxin-1. MAPK1, MAPK3 and CDK4 are significantly dysregulated in SCA1 cells. (JPG)

S1 Table Nodes of the hPIN. Columns indicate protein identifiers (UniProtKB), hyperbolic coordinates (r, theta), and centralities (degree DC, betweenness BC, closeness CC and eigenvector EC). (XLSX)

S2 Table Edges of the hPIN. Columns indicate protein identifiers (UniProtKB; p1, p2), hyperbolic distance, and r difference. (XLSX)

S3 Table Training dataset of experimentally known phosphorylation and dephosphorylation PPIs. Columns indicate the effector protein identifier (UniProtKB; p1), effector type, and target protein identifier (UniProtKB; p2). (XLSX)

S4 Table Prediction scores of directed PTM-PPIs. Columns indicate the predicted effector and target protein identifiers (UniProtKB; p1, p2), the prediction score of our method, and the classifications by our method, PhosD and Phosformer-ST.
(XLSX)

S5 Table Significantly dysregulated proteins (|log2FC| ≥ 0.5, adj. p-value ≤ 0.05) in SCA1 (805 proteins). Columns indicate the protein identifier (UniProtKB ID and gene name), log2FC value, p-value, adjusted p-value and cluster number of the protein in the SCA1 PPI network: values are (i) C1–C5 or unclustered for the 636 strongly connected proteins or (ii) blank for the remaining 169 less connected proteins. (XLSX)

S6 Table Components of a critical PPI network involved in SCA1 pathogenesis. Columns indicate protein identifiers (UniProtKB ID and gene name), log2FC value and adjusted p-value. (XLSX)
Practices and attitudes of adult psychiatrists regarding methamphetamine-associated psychotic disorder: an internet based survey conducted in Turkey | 85d3301e-40c4-4dbc-a7b4-1323c5757377 | 11699667 | Psychiatry[mh] | According to United Nations Office on Drugs and Crime (UNODC) World Drug Report published in 2023, an estimated 36 million people used amphetamines in 2021, representing 0.7 per cent of the global population. While the prevalence of use is highest in North America, the largest number of users of amphetamines are found in East and South-East Asia. Record-high quantities of amphetamine-type stimulants were seized in 2021, dominated by methamphetamine at the global level . The most often used form of methamphetamine is crystal, which has strong addictive effects and is typically smoked, injected, or inhaled. Due to the lipophilic nature of methamphetamine, when it is administered, it quickly crosses the blood–brain barrier and enters the bloodstream before penetrating the brain. Methamphetamine's half-life varies depending on how it is absorbed, however it typically lasts five to thirty hours. Due to the rapid onset and termination of its effects, methamphetamine users may need repeated doses . Repeated use of methamphetamine, which also has a place in the second-line treatment of attention-deficit/hyperactivity disorder, severe obesity and narcolepsy, under uncontrolled conditions can lead to methamphetamine use disorder (MUD) . Methamphetamine use has various neuropsychiatric complications. Increased alertness, irritability, loss of appetite, and overconfidence are psychiatric symptoms that are more common, especially at low doses. When used in high doses, it can cause fear, restlessness, anxiety, panic attack, psychomotor agitation, and various psychotic symptoms. Prominent psychotic symptoms in methamphetamine-associated psychotic disorder (MAP) include ideas of reference, tactile and auditory hallucinations, increased activity, odd speech, and paranoid delusions . The Diagnostic and Statistical Manual of Mental Disorders, 5th Edition, Text Revision (DSM‐5-TR) defines a substance‐induced psychotic disorder as the presence of hallucinations and delusions developed during, or soon after, intoxication or withdrawal from a substance or medication known to cause psychotic symptoms, such as methamphetamines, and the presence of psychotic symptoms not mediated by another nonsubstance‐induced psychotic disorder that persists longer than one month after substance intoxication or withdrawal . Many psychiatric symptoms are similar in paranoid schizophrenia and MAP . However, MAP has aspects that differentiate it from primary psychotic disorder and other drug-associated psychotic disorders. When methamphetamine usage persists, psychotic symptoms usually get worse over time . It was once believed that methamphetamine withdrawal symptoms would dissipate in a week. Studies have revealed that while most MAP patients have symptom resolution within a month, 30% of MAP patients experienced symptom persistence up to six months, and 10–28% reported symptom persistence longer than six months. Symptoms of MAP have been shown to relapse after long periods of abstinence . According to the DSM-5-TR, a persistent psychosis that persists six months after quitting methamphetamine may be diagnosed as schizophrenia . Methamphetamine use is associated with a prevalence of psychotic symptoms ranging from 10 to 60%, indicating the possibility of unique neurobiological dysregulations in MAP patients. 
According to the literature, the disorder can appear anywhere from 1.7 to 5.2 years after methamphetamine use begins . Studies reporting that even a single use of methamphetamine may cause psychotic symptoms indicate that this period should continue to be investigated in future studies . Methamphetamine seems to mainly impact the mesocortical, mesolimbic, and nigrostriatal dopaminergic pathways. Methamphetamine metabolism inhibits both the vesicular monoamine transporter and the dopamine transporter, which affects dopamine transmission in the central nervous system. Dopamine concentrations rise, and may even become neurotoxic, when these proteins are inhibited. Glutamate and dopamine signalling are subsequently elevated as a result of altered polysynaptic connections between various dopaminergic systems brought on by elevated dopamine concentrations. After long-term use, dopaminergic receptor density and function are altered, particularly in the striatum and mesolimbic system. This interferes with feed-forward processes and causes sensitization and addiction . The mechanism by which methamphetamine causes psychosis has not yet been clearly elucidated. There have been discussions about the validity of a number of methamphetamine-associated animal models of psychosis, including the behavioral sensitization model, the neurotoxicity model, and the escalating dose-binge model . Studies reveal that gamma-aminobutyric acidergic interneurons may be overloaded by excessive dopamine signalling, which could cause dopamine systems to become dysregulated and perhaps result in psychotic symptoms. Glutamate dysregulation may additionally be brought on by increased neurotoxicity and damage to cortical interneurons, which can degrade N-methyl-D-aspartate receptors and cause damage to the cortex. This damage to the brain can then result in MAP-related symptoms . Worldwide, seizures of methamphetamine have increased five-fold over the previous decade, while seizures of cocaine, cannabis, opioids, and opiates have not changed significantly. The UNODC highlights the geographical spread of methamphetamine trafficking: methamphetamine use and manufacture continue to expand from traditional markets such as South-East Asia to new markets such as Western Europe. The recent increase in methamphetamine use and production in Afghanistan raises growing concern in Turkey, which serves as a geographical bridge between Asia and Europe . For these reasons, methamphetamine use and MAP are increasingly problematic for Turkey and require urgent intervention approaches. Psychiatrists play a major role in the management of MAP-related psychiatric problems. As mentioned above, methamphetamine use characteristics have changed over the years and the number of users has increased . It is therefore possible that many currently practising psychiatrists did not encounter MAP cases during their specialty training. Additionally, some training institutions have an inpatient unit while others do not, and because some inpatient units do not provide adequate conditions, patients with psychotic features cannot be hospitalized there. In mental health and disease hospitals, MAP hospitalizations are frequently performed both voluntarily and involuntarily. In other words, there may be significant differences between the treatment approaches acquired in educational institutions with different characteristics at different times.
Under these circumstances, differences in treatment approaches may amount to inadequacy rather than richness. It is almost impossible to reach psychiatrists working anywhere in Turkey face to face to gather their opinions on this issue and create a road map for MAP management. In addition, people may avoid participating in scientific research during working hours, or comply carelessly with research instructions even if they do participate. The internet offers various tools to overcome these difficulties in scientific data collection. Internet-based survey tools such as Google Forms make it much easier to reach target groups in research, and these forms can be created and administered free of charge. In this study, we aimed to reach psychiatrists actively working in Turkey through an internet-based survey form created with Google Forms and to determine their approaches to MAP treatment. Our hypothesis is that psychiatrists' training characteristics and current working conditions affect their approaches to MAP treatment. The results of this study should pave the way for organizing in-service training on MAP management for psychiatrists.
This was an internet-based, quantitative, cross-sectional observational survey of psychiatrists' approaches. The survey was conducted by randomly distributing an internet questionnaire to psychiatrists included in Yahoo and WhatsApp groups.

Sampling frame

Medical education in Turkey lasts six years, and those who complete it receive the title of general practitioner. Until 1973, there was no distinction between adult and child and adolescent psychiatry in Turkey. In 1973, child and adolescent psychiatry was first organized in Turkey as a two-year subspecialty following psychiatry specialization. After 1990, child and adolescent psychiatry was transformed into a four-year major specialization, and thus adult psychiatry and child and adolescent psychiatry became separate specialization areas. As a result, in Turkey, the branch of medicine that deals with the mental health and disorders of individuals aged 18 and under is called child and adolescent psychiatry, while the branch that deals with all individuals over the age of 18 is called adult psychiatry . Psychiatry specialization training can be provided by mental health and disease hospitals, university hospitals, city hospitals, and training and research hospitals. Following the medical specialization exam, a four-year adult psychiatry residency is completed, and the title of adult psychiatry specialist is obtained after publication of the medical specialization thesis. In Turkey, specialists (medical doctors) are required to work anywhere in the country for periods ranging from 300 to 600 days. After this period is completed, psychiatrists have the right to continue working for the government or to transfer to private practice. The population of this study included all psychiatrists working actively in Turkey. All psychiatrists included in this study were medical doctors, specialized in psychiatry, and clinicians actively following up and treating patients diagnosed with psychiatric disorders.

Concepts and institutional processes

The most prominent centres for the treatment of substance use disorders in our country are the Alcohol and Drug Addiction Research, Treatment, and Training Centres (AMATEM), which have been operating in Turkey since the 1980s. MAP follow-up and treatment are carried out in outpatient or inpatient AMATEM clinics or in closed psychiatric wards. Hospitalization for the treatment of MAP can be voluntary or involuntary, the latter based on articles 432–437 of the Turkish Civil Code (TCC) as ordered by local courts. In this study, the concept of partial or complete involvement of the participants in the treatment of any MAP case is frequently mentioned. Complete involvement refers to situations in which, starting from the first admission of a MAP episode, psychotic symptoms are completely eliminated and the patient achieves remission. Participants were not required to have carried out this treatment process alone: treatment processes in which they took part as members of a team of several psychiatrists were accepted as their own experience. This information was explained in detail in the introduction to the internet survey. The concept of "working duration in psychiatry" indicates the participant's year in psychiatry; for example, a psychiatric resident of 2 months is considered to be in the 1st year.
In Turkey, psychiatrists can work in university hospitals, training and research hospitals, city hospitals, provincial state hospitals, district state hospitals, mental health and disease hospitals, community mental health centres, private clinics, and private hospitals. Inpatient treatment units are mostly located in university hospitals, training and research hospitals, and mental health and disease hospitals. City hospitals mostly house inmate forensic psychiatry inpatient units and high-security forensic psychiatry inpatient units. Community mental health centres are day care centres. Provincial and district state hospitals usually have no inpatient treatment unit and function as outpatient clinics. Closed psychiatric wards are only available in mental health and disease hospitals, and involuntary hospitalizations under TCC articles 432–437 are carried out only in these hospitals. There are 11 mental health and disease hospitals in Turkey. All of the institutions mentioned in this study, except private clinics and private hospitals, are managed by the state. Long-acting injectable (LAI) antipsychotics can be used in the treatment of MAP. In this study, the LAI antipsychotics available in Turkey were questioned: once-monthly paliperidone palmitate (PP1M), zuclopenthixol decanoate depot, risperidone consta, haloperidol decanoate, and aripiprazole maintena.

Sample size calculation

When the literature was examined, no study examining psychiatrists' approaches to MAP was found, either worldwide or in Turkey. The number of actively working psychiatrists in Turkey is estimated at approximately 6000 . To estimate the proportion of psychiatrists who had participated in the treatment process of at least one MAP case, data from the city where the first author was located (Elazığ) were taken into account. Almost all psychiatrists (n = 45) working in Elazığ province were contacted and asked whether they had ever followed up a MAP case; the rate of those who said yes was 78%. Using a population size of 6000 and 78% as the population proportion of MAP follow-up, at a 5% margin of error and 95% confidence level, a sample of 253 participants would achieve adequate power for this study.

Development of questionnaire

The survey draft was developed in collaboration with all authors, who have academic and clinical experience in the field of MAP, and was finalized by the first author. The survey was developed using the literature and clinical experience, taking into consideration psychiatric training in Turkey and regional substance use patterns. The sociodemographic items were based on the literature. The items questioning MAP-related situations and clinicians' approaches to MAP treatment processes were based on clinical experience, and the items on the treatment of MAP symptoms drew on both clinical experience and the literature. The main justification for relying on clinical experience was the lack of sufficient information on the relevant subject in the literature. Leading questions were avoided, and all of the questions had neutral content. The survey language was Turkish. In the first section of the survey (landing page), informed consent was obtained. Those who wanted to participate in the survey were directed to the second section regarding group selection (groups 1, 2 and 3).
Participants in the group referred to as group 1 in the text were directed to the last section, which included questions about sociodemographic and training variables (10 items). Participants in groups 2 and 3 were directed to a last section in which their approaches to MAP treatment were questioned in addition to the sociodemographic and training variables (59 items). We piloted the survey and made additional revisions in response to input from eighteen psychiatrists. Participants working in Turkey received the Google Forms-created survey through Yahoo and WhatsApp groups. The survey used in this study was developed solely for this study and has not been previously published elsewhere (Appendix 1).

Recruitment procedure, inclusion and exclusion criteria

While carrying out the recruitment procedure and sample selection, the directives explained in detail in Örüm's study were applied. The study title, purpose, scope, the definition and diagnostic criteria of MAP, ethics committee approval, and form completion time were among the information included on the landing page. The explanations on the landing page can be accessed via Appendix 1. Three groups were formed based on the answers obtained (group 1, group 2, group 3). The estimated mean completion time was approximately 1–2 min for the option 1 survey and 6–8 min for options 2 and 3. The survey was open from October 8, 2023 to November 6, 2023. Each researcher assessed the survey responses independently and collectively. Only adult psychiatrists were included in the study, and the word psychiatrist used anywhere in the text refers to adult psychiatrists.

Data extraction, data security, statistical analysis

The internet-based survey was hosted on the Google Forms platform, a free, secure, end-to-end encrypted form builder for creating online forms that capture classified data. Data were downloaded and stored in Microsoft Excel, an application for managing online surveys and databases. The data were shared only with the authors of the study for analysis and interpretation purposes; it was not possible to access the data except through the authors, and no participant's identity could be derived from the study findings. All analyses were performed using IBM SPSS Statistics version 22.0. Descriptive statistics and continuous variables were given as mean ± standard deviation, and categorical variables as frequency and percentage. The chi-square test was used to compare categorical data between the groups and genders. Binary logistic regression analysis was used for group prediction: the grouping variable (group 2 vs group 3) was the dependent variable, and the sociodemographic and clinical parameters were the independent variables. The suitability of the independent variables for the model was checked with the Hosmer and Lemeshow test. A p value of less than 0.05 was set as statistical significance.
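For reference, the figure reported in the "Sample size calculation" subsection above can be reproduced with Cochran's formula plus a finite-population correction; the sketch below assumes this standard formula was the one used.

import math

def survey_sample_size(N, p, e, z=1.96):
    # Cochran's formula with finite-population correction, rounded up
    n0 = z ** 2 * p * (1 - p) / e ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / N))

# 6000 psychiatrists, 78% expected proportion, 5% margin, 95% confidence
print(survey_sample_size(N=6000, p=0.78, e=0.05))  # -> 253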
Sociodemographic and psychiatric training characteristics of participants

Four hundred and eight participants (216 females, 192 males) were included in the study. Overall (n = 408), the mean age was 33.86 ± 6.61 years (minimum 25.00 years, maximum 59 years, median 32.00 years) and the working duration in psychiatry was 7.09 ± 5.79 years (min 1 year, max 30 years, median 6.00 years). Data from 78 participants (40 females, 38 males) in group 1 were examined; the mean age was 30.48 ± 6.13 years (min 25 years, max 52 years) and the working duration in psychiatry was 3.71 ± 5.56 years (min 1 year, max 25 years). Data from 128 participants (70 females, 58 males) in group 2 were examined; the mean age was 32.43 ± 4.06 years (min 25 years, max 43 years) and the working duration in psychiatry was 5.95 ± 3.59 years (min 1 year, max 16 years). Data from 202 participants (106 females, 96 males) in group 3 were examined; the mean age was 36.06 ± 7.28 years (min 26 years, max 59 years) and the working duration in psychiatry was 9.12 ± 6.21 years (min 1 year, max 30 years). The three groups differed significantly in terms of age (p < 0.001) and working duration in psychiatry (p < 0.001), with all pairwise comparisons significant for both variables. Sociodemographic and clinical characteristics of groups 1, 2 and 3 are shown in Table .

Clinical approaches and experiences of group 2 and group 3

The experiences and clinical approaches to MAP of group 2 and group 3, which represent participants who were involved in the treatment process of at least one MAP case, are shown in Table . MAP treatment has its own challenges; the approaches of group 2 and group 3 to situations that may be encountered during the MAP treatment process are shown in Table .

Psychotropic use characteristics of group 2 and group 3

Oral antipsychotic use characteristics of group 2 and group 3 are shown in Table . LAI antipsychotic use characteristics of group 2 and group 3 are shown in Table . Non-antipsychotic psychotropic use characteristics of group 2 and group 3 are shown in Table .

Comparison of sociodemographic and clinical variables of group 2 and group 3 in terms of gender

Participants in group 2 and group 3 (n = 330) were compared on several variables according to their gender. No significant difference was detected between genders in terms of residency institution, current institution, experience of working in a psychiatric ward, AMATEM experience, or number of MAP cases followed up (p > 0.05). Female and male participants' attitudes were similar on issues such as the necessity of inpatient treatment, the need for a closed ward, and involuntary hospitalization; psychotropic preference in insomnia, LAI antipsychotic preference, typical oral antipsychotic preference, atypical oral antipsychotic preference, atypical oral antipsychotic maintenance doses, duration of antipsychotic use in maintenance treatment according to the number of episodes, and intramuscular use of haloperidol/chlorpromazine/zuclopenthixol decanoate acuphase; antidepressant, mood stabilizer, benzodiazepine, modafinil, psychostimulant, intravenous diazepam, and routine intravenous fluid replacement preferences; psychotropic preferences in antisocial personality pattern/suicide/homicide; and encountering conditions such as delirium, neuroleptic malignant syndrome, and extrapyramidal system side effects (p > 0.05).
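Comparisons of categorical variables like these are chi-square tests on contingency tables. A minimal sketch with invented counts (the real cell frequencies are in the tables referenced above):

import numpy as np
from scipy.stats import chi2_contingency

# Invented 2 x 3 contingency table: gender (rows) by, for example,
# residency institution type (columns)
table = np.array([[40, 80, 56],
                  [35, 70, 49]])
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p)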
Association between sociodemographic/clinical variables and membership of group 2 or group 3 in binary logistic regression analysis

Binary logistic regression analysis was applied to reveal whether sociodemographic/psychiatric training characteristics and MAP-related clinical approaches, attitudes and experiences indicate to which group a psychiatrist belongs. Binary logistic regression was first applied separately to each independent variable. In these analyses, the p value was below 0.001 for age, working duration in psychiatry, outpatient/inpatient AMATEM experience, number of MAP cases followed up, treatment guideline follow-up, LAI antipsychotic use, most common LAI antipsychotic used, LAI antipsychotic use in maintenance treatment, most common typical oral antipsychotic used, maintenance doses of olanzapine, risperidone, aripiprazole and amisulpride, experience of intramuscular haloperidol plus biperiden use, experience of intramuscular chlorpromazine use, experience of zuclopenthixol decanoate acuphase use, experience of extrapyramidal system side effects, and experience of delirium. Binary logistic regression analysis of these 18 variables was performed (beginning block, −2 log-likelihood = 440.741; constant p < 0.001, B = 0.456, Exp(B) = 1.578; block one, −2 log-likelihood = 146.636; Cox & Snell R² = 0.590; Nagelkerke R² = 0.800). We then aimed to create a meaningful model with fewer variables. Variables with a Nagelkerke R² of 0.200 or less were removed from the model. The Nagelkerke R² of the number of MAP cases followed up, LAI antipsychotic use, LAI antipsychotic use in maintenance treatment, most common typical oral antipsychotic used, experience of zuclopenthixol decanoate acuphase use, experience of extrapyramidal system side effects, and experience of delirium was above 0.200. Where several questions addressed the same topic, only the variable with the highest Nagelkerke R² was retained: the questions "LAI antipsychotic use in maintenance treatment" and "experience of zuclopenthixol decanoate acuphase use" were removed from the model because they covered the same field as "LAI antipsychotic use experience in the treatment of MAP". In the binary logistic regression analysis of the remaining five independent variables, the question that contributed least to the model was the number of MAP cases followed up (p = 0.127); this variable, which was harder to administer because it had more than two answer options, was removed from the model. Only twenty participants responded with zuclopenthixol oral use, one of the answers to "most common typical oral antipsychotic used"; when this variable was added to the regression model, the Hosmer and Lemeshow test p value remained below 0.05, so it was also removed. As a result, a total of three independent variables were included in the model, all of them easy-to-apply two-choice questions. Data from the binary logistic regression model are presented in Table . According to the binary logistic regression analysis, the sensitivity of our model in identifying participants who had been involved in the complete treatment process of at least one MAP case was 81.2%, and the specificity was 68.8%.
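A schematic of this final model-building step (three binary predictors, odds ratios as Exp(B), and sensitivity/specificity from the confusion matrix) is given below with placeholder data; the study itself used IBM SPSS rather than Python.

import numpy as np
import statsmodels.api as sm
from sklearn.metrics import confusion_matrix

# Placeholder data: three two-choice (binary) predictors for 330 respondents;
# y = 1 if the respondent belongs to group 3, 0 if group 2
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(330, 3)).astype(float)
y = (X.sum(axis=1) + rng.normal(0.0, 1.0, 330) > 1.5).astype(int)

Xc = sm.add_constant(X)
fit = sm.Logit(y, Xc).fit(disp=0)
print(np.exp(fit.params))  # odds ratios, i.e. Exp(B)

# Classification at the 0.5 probability threshold
pred = (fit.predict(Xc) >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))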
This study examines the practices and attitudes of psychiatrists who continue to work actively in Turkey regarding MAP treatment. Although the participants were initially divided into three groups, the focus of the study was those with partial (group 2) or complete (group 3) MAP treatment experience. These two groups, who participated in the treatment process of at least one MAP case, were compared in terms of sociodemographic data, psychiatric training, institutional and regional characteristics, MAP-related experience, clinical approaches, and psychotropic preferences, and significant findings were obtained. The fact that the gender distribution was similar between the groups made it easier to interpret the findings. Those whose current institution is a university hospital, city hospital, provincial/district state hospital, or community mental health centre have more partial MAP treatment experience. Complete MAP treatment experience is higher among participants whose current institution is a mental health and disease hospital. The most likely reason is the need for a closed ward in MAP treatment, together with the fact that closed wards in Turkey are almost always located in mental health and disease hospitals. The majority of patients diagnosed with MAP who are admitted to institutions other than mental health and disease hospitals are referred to mental health and disease hospitals before starting treatment, and their treatment is usually completed there . Participants involved in the complete MAP treatment process have more AMATEM experience. This finding is expected, since drug-related treatments in Turkey are often carried out in these centres . The fact that MAP cases are mostly followed-up and treated in mental health and disease hospitals also increases the complete MAP treatment experience of the psychiatrists working there. Those who have experience with complete MAP treatment are more likely to follow a guideline. Considering that psychiatrists who are partially involved in MAP treatment often refer patients to a closed psychiatric ward, it is understandable why they do not feel the need for a guideline. Almost all of those involved in MAP treatment, both partially and completely, think that hospitalization is necessary at some stage of MAP treatment. Participants with complete MAP treatment experience think that hospitalization in MAP should take place in a closed psychiatric ward. This approach to closed ward admission is understandable for participants with complete MAP treatment experience, who have witnessed all stages of MAP treatment and are more exposed to the attendant risks. Those with both partial and complete MAP treatment experience are undecided about involuntary hospitalization, and the rates in both groups are similar. We suggest that the medical, ethical, and judicial dimensions of involuntary hospitalization be discussed in depth and that studies be carried out to eliminate the uncertainty on this issue. Patients diagnosed with MAP in the acute exacerbation period can present with consequences including suicidal and homicidal behaviours . The delusions of jealousy, reference, and persecution, and the auditory hallucinations seen in MAP cases, lead to loss of insight and therefore rejection of voluntary admission . In such a case, the choice of involuntary hospitalization should be discussed, taking into account the best interests of the patient.
Rates of intramuscular antipsychotic use, including haloperidol, chlorpromazine, and zuclopenthixol decanoate acuphase, were higher among participants who experienced complete MAP treatment. Antipsychotics can be administered intramuscularly for rapid and strong effectiveness in MAP accompanied by agitation and aggression . Patients with these characteristics are generally inpatients. Since the rate of working in institutions with a psychiatric ward was higher in the complete MAP treatment group, this finding can be considered expected. Quetiapine is most commonly used in the treatment of possible insomnia occurring in MAP, and those involved in the complete treatment process use quetiapine more frequently for this purpose. The potential benefits of quetiapine in substance use disorders may be related to its frequent use . The majority of participants involved in MAP treatment, both partially and completely, most commonly favour oral risperidone as an antipsychotic, and sodium valproate plus valproic acid or carbamazepine as a mood stabilizer, in patients with antisocial personality traits, suicidal/homicidal thoughts/behaviours, and self-mutilation. Also, a history of suicidal/homicidal thoughts/behaviours and self-mutilation in MAP encourages the majority of participants to use LAI antipsychotics. For a patient diagnosed with MAP whose body mass index is below normal limits, even one with antisocial personality traits, psychiatrists’ antipsychotic preference shifts from risperidone to olanzapine. The frequency of encountering extrapyramidal system side effects, life-threatening conditions, and delirium was higher among those working in institutions with an inpatient ward. Additionally, the most common extrapyramidal system side effect observed during MAP follow-up and treatment was dystonia. While olanzapine is the most frequently preferred atypical oral antipsychotic in both groups, risperidone is the second most frequently preferred. It is known that antisocial personality traits are common in MAP cases . It was emphasized above that participants in both groups preferred risperidone more frequently in patients diagnosed with MAP with antisocial personality traits. Despite this, olanzapine is more frequently preferred overall as an atypical oral antipsychotic in the treatment of MAP. One possible explanation may be that risperidone is associated with more extrapyramidal system side effects . Aripiprazole and paliperidone are the most preferred atypical oral antipsychotics after olanzapine and risperidone. Participants with complete MAP treatment experience are more likely to use higher doses of olanzapine, risperidone, aripiprazole, and amisulpride in the maintenance treatment of MAP. The fact that these participants have been involved in the treatment of more patients diagnosed with MAP and have encountered many drug side effects may enable them to make bolder decisions. Non-antipsychotic psychotropic use was higher among participants involved in the complete MAP treatment process. In both groups, participants who think that antipsychotics should be continued for at least 6–12 months after psychotic symptoms disappear in the maintenance treatment of the first MAP episode constitute the largest proportion (37.5% and 37.6%). However, a detailed examination of the results shows that the participants do not have a common practice on this issue.
It has been determined that the duration of antipsychotic use in the maintenance treatment of MAP varies over a wide range (1 month to 3 years). In both groups, participants who think that antipsychotics should be continued for at least 3–5 years after psychotic symptoms disappear in the maintenance treatment of the second MAP episode constitute the largest proportion (35.9% and 33.7%). The results show that the participants do not have a common practice for the second episode either; attitudes ranged extremely widely, from 1 month to lifelong use. Those who think that antipsychotics should be used throughout life in the third and subsequent MAP episodes are in the majority in both groups, although disagreement about the duration of antipsychotic use in MAP maintenance persists here as well. On the other hand, participants with complete MAP treatment experience think that antipsychotics should be used for a significantly longer time in the second and subsequent MAP episodes. No significant effect of gender was found on the variables examined in this study. As the working duration in psychiatry increases, both the doses of antipsychotics used in the maintenance treatment of MAP and the duration of antipsychotic use increase; this is thought to be directly related to growing patient experience. Binary logistic regression analysis determined that antipsychotic use characteristics and having encountered possible life-threatening situations were the most effective variables in revealing the experience of partial or complete MAP treatment. Again, according to the binary logistic regression analysis, it is possible to determine which group a participant belongs to, at a rate of 43.5%, with three yes/no questions (experience of LAI antipsychotic use, extrapyramidal system side effects, and delirium).

Strengths, limitations and future directions
The most important strength of this study is that there is no study with similar features in the literature. Another strength is that participants representing psychiatrists actively working in Turkey were reached through an internet-based survey. Psychiatrists’ practices and attitudes towards the follow-up and treatment processes of MAP are discussed in detail, as are the effects of psychiatric training and institutional characteristics on these approaches. Just as the psychotic features of MAP cannot yet be clearly explained and positioned relative to primary psychotic disorder, psychiatrists’ views on the subject are far from a common practice. There are significant differences of opinion on very important topics such as hospitalization, the features of oral/intramuscular/LAI antipsychotic use, approaches to possible conditions accompanying MAP, and antipsychotic use characteristics in maintenance treatment. The cross-sectional nature of the study can be considered a limitation. The validity of the responses to the survey has not been confirmed, as this is an internet-based study. This study includes only adult psychiatrists working in Turkey. Considering that drug use characteristics vary regionally, it is not appropriate to generalize the results. The survey was distributed to psychiatrists in Yahoo and WhatsApp groups, which may introduce sampling bias, as not all psychiatrists may be part of these groups. It is not known how often and to what extent psychiatrists use applications such as Yahoo and WhatsApp.
It is also not known which characteristics distinguish physicians who use these applications and show interest in online surveys. Participation in the survey was voluntary, leading to potential self-selection bias, as psychiatrists who chose to participate may have different perspectives from those who did not. These differences in approach suggest that the DSM-5-TR definition of MAP should be re-evaluated, and it is thought that special importance should be given to the MAP section in the next edition of the DSM. Undoubtedly, the MAP criterion related to the duration of psychotic symptoms will be one of the most discussed items. Additionally, the fact that MAP has characteristics distinct from other drug-associated psychotic disorders may be addressed in the next DSM edition. In this respect, this study will provide a different perspective for studies examining the similarities and differences between primary psychotic disorder and MAP.
There are many variables that affect psychiatrists’ attitudes and practices regarding MAP treatment. The psychotic nature of MAP, and psychiatrists’ approaches to that nature, appear to vary significantly. The duration of antipsychotic use in the maintenance treatment of MAP is an important matter of debate. The most important result of this study is that, as their experience of participating in all phases of MAP treatment increases, psychiatrists make bolder decisions, such as preferring LAIs more often, administering higher doses of antipsychotics, selecting more potent drugs, and using more antidepressants, benzodiazepines, and mood stabilizers. The findings presented support the lack of any standardization in MAP treatment. There is a need for mental health organizations, primarily the Turkish Psychiatry Association, to come together and conduct algorithm and standardization studies on MAP treatment. Considering that methamphetamine use and related problems are increasing, it is recommended that all psychiatrists, even if they are not directly involved in MAP treatment, increase their knowledge of MAP treatment processes through in-service training. It is anticipated that the literature produced through future efforts by mental health organizations will guide government policies. It is essential to integrate standardized data related to MAP diagnosis, treatment, and follow-up into continuing medical education. This study, which examines the approaches of psychiatrists to MAP treatment in Turkey, needs to be supported by further studies.
Supplementary Material 1.
Improving practice in PD-L1 testing of non-small cell lung cancer in the UK: current problems and potential solutions | 8cbd2449-601b-46bd-9359-cad26c96ea71 | 10850646 | Anatomy[mh] | Advances in first-line and second-line therapy have led to the approval of immune-modulating drugs for patients with non-small cell lung cancer (NSCLC). Programmed cell death ligand 1 (PD-L1) expression offers a predictor of response for many of these medicines, but it is a fragile biomarker and there is a pressing need for greater consistency in its reporting across laboratories.
Assessment of current PD-L1 testing practice in the UK provides new understanding of the variability observed between centres, particularly in the distribution of PD-L1 scoring.
The survey results evidence the need for formal networking of individuals and laboratories to reduce inconsistency in the assessment and reporting of the expression score, the crucial endpoint of PD-L1 testing in NSCLC.
Lung cancer remains the leading cause of cancer-related deaths in the UK in both men and women. This is despite the fact that the majority of these tumours, those classified as non-small cell lung cancer (NSCLC), comprise the group for which an increasing range of targeted therapies has been developed over the past decade. Such therapies include an expanding group of tyrosine kinase inhibitors targeted against tumours with specific genetic aberrations, that is, single genomic drivers, and a group of immune-modulating drugs (IMs) targeted against the programmed cell death protein 1 (PD-1)/programmed cell death ligand 1 (PD-L1) immune checkpoint. The IMs depend for their efficacy on the tumour exploiting the PD-1/PD-L1 checkpoint to protect itself from an immune response, an adaptive mechanism that manifests itself in increased expression of PD-L1 on the surface of tumour cells. A range of IMs is currently approved in the UK for the treatment of NSCLC , differing in terms of their licensed indication, as defined by the European Medicines Agency, and in patient eligibility, as defined by the National Institute for Health and Care Excellence and Scottish Medicines Consortium. Among these eligibility criteria is the level of expression of PD-L1, as detected by immunohistochemistry (IHC). This is generally reported as the tumour proportion score (TPS), the percentage of tumour cells expressing PD-L1 on their surface. Assessing expression of PD-L1 is by far the most commonly used predictor of response of NSCLC to IMs. Unfortunately, it is a fragile biomarker, compromised by its biological heterogeneity, variations in laboratory practice, including reluctance to use ‘cytology’ specimens for its assessment, and challenges in interpretation. Many of these challenges can be addressed only by understanding the nature of and variability in the practice and experience of those involved in PD-L1 testing across a wide range of laboratories and this information is not currently available on the necessary scale. As a precursor to defining a strategy to improve the reliability and consistency of PD-L1 testing, we thought it essential to gather comprehensive data on current practice across the UK.
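Before turning to the survey itself, a minimal, hypothetical sketch of the arithmetic behind TPS reporting may help. It uses the conventional 1% and 50% cut-offs discussed throughout this paper and the adequacy criterion of at least 100 viable tumour cells described later; the function name and signature are illustrative only.

```python
# Map a raw tumour cell count onto the commonly reported TPS categories.
def tps_category(positive_tumour_cells: int, total_tumour_cells: int) -> str:
    if total_tumour_cells < 100:
        return "inadequate (<100 viable tumour cells)"
    tps = 100 * positive_tumour_cells / total_tumour_cells
    if tps < 1:
        return "negative (<1%)"
    if tps < 50:
        return "low (1-49%)"
    return "high (>=50%)"
```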
A questionnaire was devised covering many aspects of PD-L1 testing of NSCLC, and members of the Association of Pulmonary Pathologists (APP; appathologists.com) were contacted by email and invited to participate. The APP has a broad membership, ranging from general pathologists in district general hospitals who have an interest in the area to single-speciality pathologists in academic institutions, many of whom service tertiary thoracic surgical centres. To ensure the capture of as many testing centres as possible, the APP membership list was checked against other contact lists of laboratories and individual pathologists known by the authors to be involved in PD-L1 testing of NSCLC. To avoid duplication, participants were requested to complete the survey only if they were the lead person at their centre responsible for testing. All responses were anonymous to encourage participation and open disclosure. The survey comprised 26 questions. For the majority of these (19 of the 26), respondents selected a response from prespecified options. These covered such areas as the sources, number and nature of specimens tested, by whom this was performed and their involvement in thoracic pathology in general and in PD-L1 testing specifically, at what point in the diagnostic and management pathway testing was performed, the assay(s) used, turnaround times (TATs) between receipt of samples in the laboratory and reporting of results, and training and involvement in external quality assurance (EQA) schemes. In four of these areas, a more nuanced free-text response was requested: reflex testing, expression and reporting of results, approach to repeating a test, and the range of results obtained across three groups as determined by PD-L1 expression scores (‘negative’, ‘low’ and ‘high’) according to the conventional 1% and 50% ‘cut-offs’. The survey remained open between 12 June 2020 and 17 July 2020.
Of the 44 centres approached, a pathologist primarily involved in PD-L1 testing of NSCLC responded from 32 (72.7%); 25 of these respondents (78.1%) continued through to the final question (although some did not answer all 26 questions). The responses to questions requiring only selection of a prespecified response are shown in the online supplementary data (10.1136/jcp-2022-208643.supp1). The free-text responses can be summarised as follows: For reflex testing, centres receiving specimens from a variety of sources had no control over how the decision to test was being made, but the details of the process varied widely. Occasional perceptions acting against reflex testing included that many patients are unsuitable for IM therapy anyway on the grounds of performance status, and that securing reimbursement for it might be problematic. The approach to expression and reporting of results showed some variation across centres: 48% expressed them as the TPS and 37% as within a ‘categorical range’ (ie, <1%, 1%–49% or ≥50%). One reported them as <1%, 1%–5% and then at 10% intervals ‘as agreed with oncologists’. No centre described the result merely as ‘low/high’ or ‘negative/positive’. The approach to retesting was largely consistent across centres. All would test a second specimen, if available, when a previous specimen had been inadequate (<100 tumour cells). A second specimen was often tested even in the context of a previous successful assessment, either on disease progression or because the result of the initial test had been very close to one of the crucial ‘cut-off’ points. Occasionally, an oncologist would request testing of a further specimen from the same tumour site if the initial test on an adequate specimen had been ‘negative’ but they were ‘running out of options’, the inference being that a second test might yield a higher score. The range of results obtained showed unexpected variation across centres . Within each of the three categories defined by the usual ‘cut-off’ points, often referred to as ‘negative’ (0% to <1%), ‘low’ (1%–49%) and ‘high’ (≥50%), the proportions reported varied widely, at 23%–70%, 10%–60% and 15%–36%, respectively.
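The discussion that follows argues that TPS results should fall roughly evenly across the three categories. As an illustration only, and not an analysis performed in this survey, a centre could check its own case mix against that expectation with a goodness-of-fit test; the counts below are invented.

```python
# Chi-square goodness-of-fit of a centre's category mix against an even split.
from scipy.stats import chisquare

observed = [70, 20, 30]                   # negative / low / high case counts
expected = [sum(observed) / 3] * 3        # an approximately even split
stat, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p:.3g}")  # a small p suggests a skewed mix
```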
Assessment of PD-L1 expression as detected by IHC is currently the only ‘test’ used universally to guide the prescription of IMs to treat patients with NSCLC, and its implementation has not been straightforward. Variation in specimen processing and in the experience of pathologists engaged in its interpretation augment the unavoidable challenges inherent in its biology and weaken its predictive power. The results of our survey highlight this variability well and raise the obvious question of how it might be reduced. In the context of UK practice, we believe our survey to be the most comprehensive yet performed in this area of diagnostics, in terms of coverage of those active in this area and the data collected. A more detailed understanding of why such variability exists is a prerequisite to devising a strategy to reduce it, assuming that variability is detrimental to the desired endpoints.

Laboratory practice
Variability in laboratory practice (the handling, processing and preparation of specimens preassessment) is almost a tradition in pathology, a legacy of an approach that, until recently, owed more to cookery than to uniform, evidence-based, regulated and tightly controlled practice. Such variability was highlighted in a recent review addressing the use of cytology specimens for assessing PD-L1 expression in NSCLC and is important because its ultimate consequence is that specimens prepared by different laboratories might already vary in how PD-L1 expression is manifested before they are interpreted by a pathologist. Such variability has been brought into sharp focus by the increasing requirement for broader predictive ‘biomarker testing’ of NSCLC using IHC, and by studies showing how variation in such techniques can have an impact on treatment choices. This is illustrated, for example, by the results reported by the UK National External Quality Assessment Service (NEQAS) on assessing expression of anaplastic lymphoma kinase (ALK) fusion protein. The ready availability of EQA schemes, across the developed world at least, provides an obvious mechanism for standardising laboratory practice and reducing variability. A comparison can be drawn between the current situation with PD-L1 testing and the serious variability in the technical quality of specimens of breast cancer assessed for human epidermal growth factor receptor 2 (Her2) expression that became apparent in the early 2000s when UK NEQAS established an EQA scheme specifically for this predictive test. A similar scheme for PD-L1 expression in NSCLC is now well established by UK NEQAS and is generating valuable information about interlaboratory variability; in the UK, subscription to such schemes is mandatory for laboratories performing such analyses in order for them to obtain UK Accreditation Service accreditation (standard ISO15189). It is important, however, that this information is acted on and the effect of these improvements re-audited. It is sobering also to realise that, in many countries, subscription to such EQA schemes is not mandatory.

Interpretation
Identifying the reasons for, and then improving interpretation of, PD-L1 expression by pathologists is more challenging still. The most worrying result of our survey is the wide variability of scoring PD-L1 expression within the three broad groups, ‘negative’ (0% to <1%), ‘low’ (1%–49%) and ‘high’ (≥50%).
These scores, the ultimate endpoints of PD-L1 testing on which crucial clinical decisions are made, should show relatively limited variation between centres, since it is unlikely, in the context of UK patients with NSCLC, that significant variation in the range of PD-L1 expression will occur for reasons of biology or geography. It is well established from clinical trials and other reports that the distribution of PD-L1 TPSs is approximately even across the three categories of ‘negative’, ‘low’ and ‘high’ with, perhaps, a tendency for slightly fewer cases in the middle category, leading towards a bimodal distribution. Broadly speaking, therefore, there is evidence from our survey that some centres may be ‘under-reporting’ the PD-L1 TPS. With the deployment of stage-agnostic reflex testing, which appears to be the dominant approach in this survey of UK centres, there could be a slight bias towards a greater, though still relatively small, proportion of early-stage disease in the test population when compared with data from clinical trials of patients with more advanced disease. Although there is evidence for lower PD-L1 expression in early-stage disease, this still would not account for the ‘outliers’ in this survey reporting high proportions of specimens as ‘negative’. Most of the laboratories in our survey used trial-validated companion diagnostic assays, so it is unlikely that the observed variation is due to poor assay sensitivity. Of course, there will always be some variability; interpreting PD-L1 expression is, by its very nature, subjective, but we do not believe that the variability we reveal here is acceptable. Guidelines for which pathologists should and should not interpret PD-L1 expression in NSCLC have emerged over recent years, but are difficult, if not impossible, to enforce. It has been suggested, for example, that interpretation should be restricted to pathologists who see at least 200 diagnostic lung cancer specimens a year, have undergone appropriate formal training (which results in some evidence of competence) and subscribe to an appropriate EQA scheme that is interpretative, not technical. Even among the laboratories covered by our survey, in which at least one pathologist, as a member of the APP, clearly has an interest in thoracic pathology, there are some worrying trends. For example, more than a third of laboratories handle fewer than five PD-L1 tests a week and, in more than 15%, the PD-L1 testing workload is spread between five and eight pathologists . All pathologists involved in PD-L1 scoring are aware of how difficult it can be and of its subjectivity. In the training programmes that are delivered for PD-L1 assessment by means of a TPS, emphasis is put on how to (semi)quantify, if not actually count, the number of tumour cells in the sample and the proportion that are ‘positive’. All levels of staining intensity are relevant and are counted. In a proportion of cases, staining can be weak, requiring examination at high magnification. As pathologists become more familiar with an assay such as PD-L1 scoring, the time required for each assessment will inevitably reduce. Anecdotally, we also hear reports of a more ‘gestalt’ approach to assessment that could conceivably lead to small numbers of positive cells, or cells with light staining, being missed.
As many pathologists are currently practising under pressurised conditions, with poor staff/workload ratios and pressure to improve TATs, taking such shortcuts is understandable; more than a quarter of respondents in this survey reported average TATs of 5 days or more. In comparison with clinical trials, from which cytology specimens were excluded, it is difficult to know precisely what impact the regular, routine testing of such specimens might have had on our observed outcomes. Most pathologists acknowledge that, in general, PD-L1 scoring of cytology specimens can be challenging and require more time, but there is no conclusive evidence that PD-L1 scores per se are lower in cytology as compared with histology (‘biopsy’) specimens. As discussed above, there is considerable variability in how cytology specimens are processed, and this may well contribute to variability in the results obtained from their assessment. In view of these challenges, there is growing interest, as in other difficult areas of diagnostic pathology, in the use of image analysis, algorithms and machine learning as an aid to interpretation. For example, the validation of such software as an aid to interpretation of PD-L1 expression in NSCLC is a component of the Northern Pathology Imaging Co-operative project, which is currently assessing its utility to a range of pathologists with varying levels of experience across six universities in the North of England. Some variability is inevitable in such complex systems as laboratories, in which activity is run and undertaken by individuals who vary in their approach, practice and the variety of skills they possess, and is not surprising. Indeed, a very similar pattern of variability, although in a slightly different context, was revealed by the LungPath study. In this survey, the approach of laboratories and pathologists to subclassifying NSCLCs into squamous and adenocarcinoma was examined, and the findings are largely recapitulated by those we describe here. This is not to say, however, that such variability cannot be reduced. We suggest that a formal network is established of all laboratories engaged in PD-L1 testing of NSCLC with a view to sharing details of practice and data resulting from testing. This would provide a basis for standardising and improving practice and would carry an important educational component. Ultimately, however, encouraging and supporting adoption of best practice might require a more rigorous approach by those institutions, such as the Royal College of Pathologists and Institute of Medical Laboratory Scientists, that are responsible for training, examining and maintaining standards. Part of the approach to remedying the serious inconsistencies in assessing specimens of breast cancer for Her2 expression referred to above consisted of removing the service from ‘failing’ laboratories. This greatly improved quality and consistency and set an important precedent.

Adequacy of samples
The only objective metric we have for sample adequacy for PD-L1 testing is the presence of at least 100 viable tumour cells in the tissue section being assessed. Intuitively, this makes sense when one is delivering a percentage score on a sample that is already severely challenged by biological heterogeneity and sampling ‘error’, but it raises questions about how representative of the patient’s disease burden the rendered score actually is.
There is evidence that TPSs reported on samples that have <100 tumour cells are much less predictive of response to IMs than scores derived from samples that are richly cellular. It is comforting that awareness and reporting of this criterion of sufficiency seem to be universal in our survey. Our survey is by no means the first to highlight the problems and challenges with PD-L1 testing in NSCLC, which were clearly apparent, for example, in the global survey conducted by the Pathology Committee of the International Association for the Study of Lung Cancer. However, we wished to concentrate specifically on practice in the UK so that addressing and resolving any problems that might become apparent could be managed efficiently under the auspices of the APP, which is a UK-based association with strong national links. It is gratifying, for example, that the College of American Pathologists is currently in the process of developing guidelines for PD-L1 testing of patients with lung cancer in an attempt to standardise and improve assessment, a strategy that also considers the possible utility of assessing tumour mutational burden as an adjunctive investigation. It is always politically difficult to impose what are often interpreted as restrictions on what individuals might or might not do, even to the point of their being seen as a threat to individuality. In the end, however, the only significant measure of quality of any test we perform, or assessment we make, is arriving at the right answer for the patient, the ultimate user of the service we provide.
There is clear inconsistency in the assessment and reporting of the expression score, the crucial endpoint of PD-L1 testing in NSCLC that is central to guiding patient management. Addressing this requires formal networking of individuals and laboratories to devise a strategy for reducing this variation.
Family-Related Motivation and Regret Intensity Among Family Liver Donors by Type of Family Relationship | 249e598f-b4a5-4cc8-a0d2-35f22951bb28 | 11954405 | Surgery[mh] | The number of liver transplants has steadily risen worldwide, with a 20% increase in 2021 compared to 2015 . Although most liver transplants in Western countries are from deceased donors, over 70% of transplants in most Asian countries are from living donors . Recently, living donor liver transplantation (LDLT) has been increasingly adopted in North America due to better outcomes for recipients . However, there are differences in how LDLT is approached between Western and Asian countries. For example, in the United States, 47% of living liver donors (LLDs) in 2022 were either biologically or legally related to the recipient , whereas in South Korea, 99% of LLDs in 2023 were related to the recipient . Under the Organ Transplant Law in South Korea, a potential LLD must be a voluntary donor aged 16 years or older, with the right to withdraw consent at any point before the transplant surgery . The decision-making process in living donors consists of 2 key phases. Initially, donors make a spontaneous and intuitive decision, considering various factors such as family harmony, moral expectations, and religious beliefs . Subsequently, after tissue compatibility test, they acquire additional information about the surgery and its outcomes, allowing them to reassess their decision analytically, weighing the associated benefits and risks . Additionally, after expressing their intent to donate, potential LLDs undergo counseling sessions with a social worker and a psychiatrist to evaluate their motivations, ensure the absence of coercion, and confirm that thorough family discussions regarding the donation have taken place . Although donation is a voluntary act, a systematic review indicated that nonetheless, a small proportion of donors (0–11.4%) regret their decision to donate . Moreover, the recent and longest follow-up study up to 20 years revealed that more donors (27.5%) experienced feelings of regret . Regret following donation has been linked to poor mental quality of life , depression, and reduced life satisfaction . Therefore, identifying the factors contributing to such regret and strategies to mitigate its intensity remains essential. Previous studies identified several postoperative factors related to post-donation regret, such as time since donation, recipient death , worse surgery recovery , and satisfaction with pain management . In contrast, factors in the decision-making process have received less attention. In the initial phase of decision-making, it is particularly important to explore the family-related motivations (FRM) for donation, apart from personal, religious, or social motivation, as liver donation in South Korea is often expected from compatible family members . Therefore, it is essential to investigate the specific FRM for donation and determine which factors are most important. A qualitative meta-synthesis study suggested that the primary motivation for family organ donation is the desire to save a loved one when no other alternative treatment is available . Shaw and Webb further highlighted that organ donation is often driven by the intention to ensure the well-being of all family members. Additional motivations related to family include fulfilling family expectations, role attribution (reluctance to have another family member undergo donation), and the quality of the donor’s intimacy with the recipient . 
The decision-justification theory proposes that the perceived intensity of regret is determined by the underlying rationale for the decision . If individuals can considerably believe the decision to be justified in retrospect, they have less regret, even if the decision has a poor consequence. Liu et al also emphasized the importance of effective decision-making that was well-informed or reflected their values because effective decision-making was related to low decision regret. Therefore, it is essential to uncover the intricate FRM of family donors and examine their relationship with regret intensity. However, specific to family donors, little is known about segmentalized motives related to family and how much this influences the decision to donate and regret following donation. If the transplant team can recognize vulnerable individuals during the preoperative stage, they can provide greater attention and offer tailored interventions in advance. Furthermore, donor motivations and psychological outcomes may vary based on the type of family relationship between donor and recipient. For instance, child donors reported lower levels of altruistic and familial motivation compared with parent and spouse donors . Parent donors also faced greater risks of anxiety and depression than sibling donors . While 1 study compared differences in regret between related and non-related donors , no study has yet examined differences in regret based on specific types of family relationships. Therefore, this study aimed to examine whether FRM was associated with post-donation regret and to explore how the type of family relationship moderated this association. The following research hypotheses were proposed: H1: Higher levels of FRM are inversely associated with regret intensity among family LLDs. H2: The type of family relationship moderates the relationship between FRM and regret intensity among family LLDs.
Design, Sample, and Setting

This study used a quantitative, cross-sectional design with secondary analysis of an existing dataset of LLDs collected in 2021 from a tertiary university hospital in Seoul, South Korea. The parent study was a descriptive study aimed at identifying predictors of health-related quality of life among LLDs; data were collected through surveys combined with retrospective medical record reviews. The original dataset consisted of 124 LLDs aged ≥19 years, all of whom were recipients' children, spouses, siblings, or parents and had undergone hepatectomy for donation more than 1 month prior. After addressing missing data, the moderation model was analyzed using a sample of 121 LLDs. A post hoc power analysis for linear multiple regression was conducted using G*Power 3.1.9.4, with a significance level of 0.05, a medium effect size of 0.15, and 8 predictors: FRM, the type of family relationship, the interaction term between FRM and family relationship type, and 5 covariates. Based on a total sample size of 121 LLDs, the analysis indicated a post hoc power of 85.5%.

Measures

Regret Intensity

The following question was used to assess post-donation regret: "If you go back to the time before the organ donation, would you still donate?" The LLDs were asked to choose one of 4 response options: 'very likely' (1), 'somewhat likely' (2), 'not likely' (3), or 'absolutely not' (4). A higher score indicated greater regret following the donation. This measure was developed based on prior studies. Before the survey, the face validity of the measure was evaluated by 5 transplant professionals and 5 LLDs using a qualitative method; they evaluated it based on clarity, complexity, relevance to the participants, and suitability for its intended purpose.

Family-Related Motivation

The FRM for liver donation were retrospectively assessed using a subscale of the Donor Motivation Questionnaire. This subscale consists of 5 items that measure family-related motives. Each item was rated on a 5-point Likert-type scale ranging from 0 (disagree) to 4 (agree very strongly); a higher total score indicates a higher level of FRM for donation. The reliability and validity of the Donor Motivation Questionnaire have been confirmed. In the current study, Guttman's lambda-2 for the FRM subscale was 0.71, indicating acceptable internal consistency.

Type of Family Relationship

The type of family relationship between the LLD and recipient was identified. Since 1 of the inclusion criteria for the original sample was that LLDs had to be within the second degree of family relationship with the recipients, donors were either children, spouses, siblings, or parents of recipients. In South Korea, most donors were children (64.4%), followed by spouses (15.3%), siblings (9.8%), parents (4.2%), and other relatives, with these proportions being similar to those in our study sample. Given that child donors represented the largest proportion of family donors, exceeding 60%, we reclassified family donors into 2 categories: child donors and non-child donors.

Demographic and Donation-Related Information

Information on sex, age, caregiver role, months since donation, recipient death, and postoperative complications was collected. Postoperative complications were categorized according to severity using the Clavien–Dindo classification.
Grade I denotes any deviation from the normal postoperative course that requires no pharmaceutical, surgical, endoscopic, or radiological intervention; examples include fluid collection and pleural effusion. Grade II complications are those requiring medication, total parenteral nutrition, or blood transfusion; examples include dyspepsia and colitis. Grade III complications necessitate surgical, endoscopic, or radiological intervention; examples include bile duct stenosis and hematoma.

Ethical Considerations

This study received approval from the Institutional Review Board of Seoul National University Hospital (approval no.: 2304-047-1421). As a secondary analysis, the requirement for obtaining informed consent was waived. De-identified data were used for data analysis.

Data Analysis

Descriptive statistics were presented as means, standard deviations (SDs), numbers, or percentages for the 2 groups classified according to the type of family relationship: the 'child' and 'non-child' groups. To explore differences between the groups, Student's t-tests were used for continuous variables, while Pearson's chi-square tests or Fisher's exact tests were applied for categorical variables. The normality of the study variables was tested using skewness and kurtosis statistics, both of which fell within acceptable ranges. Pearson's correlation coefficients were used to explore the relationships among the dependent, independent, and confounding variables.

The PROCESS macro 4.2, which uses bootstrapping and accommodates dichotomous moderators, was employed to analyze the simple moderation effect. Model 1 of the PROCESS macro was used to examine whether the type of family relationship moderated the association between FRM and regret intensity. A 95% confidence interval (CI) was generated for each effect with 10 000 bootstrap samples; an association was considered significant when the upper and lower limits of the 95% CI excluded zero. All analyses were performed using SPSS version 25 (IBM Corp.). Because 3 LLDs did not answer any of the items in the Donor Motivation Questionnaire, they were excluded, leaving 121 LLDs in the moderation analysis.

The independent variable (FRM) and the dependent variable (regret intensity) were treated as continuous variables. The type of family relationship was treated as a dichotomous variable: a child donor of the recipient was coded as 1, whereas a spouse, sibling, or parent donor was coded as 0. The control variables (age, sex, caregiver role, postoperative complications, and months since donation) were also entered into the moderation analysis. We considered that the caregiver role might contribute to regret, as some donors who were also the recipient's caregivers reported a sense of relief, while others continued to experience stress in providing postoperative care to the recipient after donation. Another variable significant in previous studies, the recipient's death, was excluded as a control variable because the number of cases was small. Among the control variables, age and months since donation were continuous. The dichotomous control variables were coded as follows: for sex, female and male participants were coded as 1 and 0, respectively; for the recipient's caregiver, the donor was coded as 1 and another family member as 0.
Lastly, the postoperative complications were coded as 1 for “grade I–III complications according to the Clavien-Dindo classification” and 0 for “no complications.”
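As a concrete illustration of the power analysis described under Design, Sample, and Setting, the following is a minimal R sketch using the pwr package as a stand-in for G*Power (which is a standalone program); the use of pwr and the variable names are assumptions, not part of the original analysis.

```r
# Post hoc power for linear multiple regression, mirroring the G*Power setup:
# alpha = 0.05, medium effect size f2 = 0.15, 8 predictors, n = 121.
library(pwr)

n <- 121
k <- 8  # FRM, relationship type, FRM x relationship, and 5 covariates

pwr.f2.test(u = k,            # numerator degrees of freedom (predictors)
            v = n - k - 1,    # denominator degrees of freedom
            f2 = 0.15,        # Cohen's f2, conventional "medium" effect
            sig.level = 0.05)
# The $power element should approximately reproduce the reported 85.5%.
```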
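Similarly, the descriptive group comparisons and normality checks might be sketched as follows; the data frame `donors`, its `child` indicator (1 = child donor, 0 = non-child donor), and all column names are hypothetical placeholders rather than the study's actual variable names.

```r
# Continuous variables: Student's t-test (var.equal = TRUE gives the classic
# Student's t rather than R's default Welch correction)
t.test(age ~ child, data = donors, var.equal = TRUE)

# Categorical variables: Pearson's chi-square, or Fisher's exact test when
# expected cell counts are small
chisq.test(table(donors$female, donors$child))
fisher.test(table(donors$complications, donors$child))

# Normality screening via skewness and kurtosis (e1071 is one common choice)
library(e1071)
sapply(donors[, c("frm", "regret")], skewness)
sapply(donors[, c("frm", "regret")], kurtosis)
```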
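The moderation analysis itself was run with the SPSS PROCESS macro. Outside SPSS, an approximate equivalent of Model 1 can be sketched in R as an OLS model with an FRM × relationship interaction and percentile bootstrap CIs; this is a hedged approximation, not the authors' implementation, and all variable names are hypothetical.

```r
library(boot)

moderation_fit <- function(data, idx) {
  d <- data[idx, ]
  m <- lm(regret ~ frm * child + age + female + caregiver +
            complications + months_since, data = d)
  coef(m)[c("frm", "child", "frm:child")]
}

set.seed(2021)
bt <- boot(donors, moderation_fit, R = 10000)  # 10 000 bootstrap resamples

# Percentile 95% CIs; an effect is significant when the CI excludes zero
for (i in 1:3) print(boot.ci(bt, type = "perc", index = i))

# Conditional (simple-slope) effects of FRM on regret intensity:
#   non-child donors (child = 0): slope = b[frm]
#   child donors     (child = 1): slope = b[frm] + b[frm:child]
```

PROCESS reports these conditional effects automatically; here they are derived by hand from the coefficient estimates.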
Sample Description

The demographic and donation-related characteristics of the participants are presented in . The sample consisted of 124 LLDs, of whom 88 were adult children of the recipient and 36 were spouses, siblings, or parents of the recipient. Among the non-child donors, there were 17 spouses (14 wives and 3 husbands), 11 siblings (5 sisters and 6 brothers), and 8 parents (3 mothers and 5 fathers). The proportion of male donors was significantly higher in the child group than in the non-child group (p=0.012), with the child and non-child groups predominantly consisting of men (63.6%) and women (61.1%), respectively. The mean age of all LLDs was 37.9 years (SD=11.4). Adult child donors were younger than the other relative donors (p<0.001). Approximately one-third of the donors (30.6%) were caregivers of the recipients, and this proportion differed significantly between the 2 groups (p<0.001). Of the 124 recipients, 11 died after transplantation. The average time since donation was 30.4 (SD=26.1) months. Approximately 8.1% of the donors developed grade I–III complications postoperatively. The mean FRM score was 11.0 (SD=4.0), and the mean regret intensity score was 1.3 out of 4 (SD=0.5). The mean regret scores differed significantly between the 2 groups (p=0.047).

Family-Related Motivations

The 5 items of FRM of family LLDs according to the type of family relationship are shown in . The most common motive was "because I felt family affection for the recipient" (87.9%), followed by "because it was desirable for the well-being of the whole family" (86.3%). The least common motive was "because of family expectations to help the recipient" (11.3%). The motive with the greatest difference between the child and non-child groups was "because I didn't want another family member to suffer from organ donation."

Correlations Among the Study Variables

Pearson correlations among the study variables are shown in . FRM was correlated with sex (p<0.05) and age (p<0.05). The type of family relationship was negatively correlated with sex (p<0.05), age (p<0.001), and caregiver role (p<0.001). Regret intensity was negatively associated with FRM (p<0.05) and the type of family relationship (p<0.05).

Moderating Effect of Family Relationship Type on the Association Between FRM and Regret Intensity

The bootstrap analysis revealed that the type of family relationship significantly moderated the association between FRM and regret intensity ( ). After adjusting for age, sex, caregiver role, postoperative complications, and months since donation, FRM was negatively associated with regret intensity (Effect=−0.074, 95% Boot CI [−0.148, −0.017]). The type of family relationship was also negatively associated with regret intensity (Effect=−0.944, 95% Boot CI [−1.970, −0.193]), indicating that non-child donors were more likely to experience regret than child donors. The interaction term (FRM × type of family relationship) was positively associated with regret intensity (Effect=0.062, 95% Boot CI [0.003, 0.141]), confirming the moderating role of family relationship type. The conditional effects of FRM on regret intensity are shown in . Among child donors, FRM was not significantly associated with regret intensity (Effect=−0.012, 95% Boot CI [−0.039, 0.015]); however, among spouse, sibling, and parent donors, FRM was significantly and inversely associated with regret intensity (Effect=−0.074, 95% Boot CI [−0.124, −0.024]).
This study found that low FRM is associated with increased regret intensity among family LLDs. It also identified the moderating role of the type of family relationship in the association between FRM and regret intensity: among spouse, sibling, and parent donors, those with lower FRM tended to experience significantly greater regret, whereas this trend was not significant among child donors.

In this study, most family LLDs decided to donate out of love for the recipient and a desire for the well-being of the entire family, which is consistent with previous thematic analyses of qualitative research on living kidney donors. Furthermore, most participants indicated that their decisions were not influenced by family expectations, suggesting that most family LLDs are likely free from coercion. In addition, compared with child donors, non-child donors were more inclined to donate to prevent another family member from suffering due to liver donation. Spouse, sibling, and parent donors expressed greater concern for other family members who might eventually need to donate if they did not take action themselves. This concern may explain the higher proportion of women among the non-child donors. Rota-Musoll et al noted that the prevalence of wives among spouse donors can be attributed to their tendency to alleviate the caregiver burden on their husbands (the recipients) and to prevent their children from becoming donors. For parent donors, a previous study found that mothers donate more frequently than fathers owing to factors such as the father being the sole breadwinner or the mother having a closer relationship with the children (the recipients). However, our study revealed different results regarding parent donors. Although the parent sample size was too small for a definitive comparison, the limited number of mothers may be attributed to the increasing socioeconomic roles of women in Korean society. Additionally, fathers may have taken on the role of donors as mothers assumed responsibility for postoperative care of both the donor (husband) and the recipient (child). Future research should compare donors not only by family relationship but also by gender, using a larger sample size.

Meanwhile, in the child donor group, there were more sons than daughters. The high proportion of sons among child donors may reflect the social climate and expectations regarding women's roles in East Asia. Young, single women are often implicitly excluded from consideration due to traditional associations with future pregnancy and the desire to remain physically unscarred. Married daughters also tend to be excluded because their newer family roles are perceived as more important.

The level of FRM showed a negative association with the intensity of post-donation regret. This finding that a pre-donation factor was related to regret after donation is in line with a prospective study showing that a greater sense of responsibility toward the recipient and greater expectations regarding benefits were linked to higher post-donation regret. Therefore, psychosocial assessment of donor candidates should be sensitive to the underlying motivations of individuals with low FRM, and more tailored guidance is needed for exploring motives and assessing decision-making.
We suggest a motivational intervention for prospective donors: a preventive intervention that helps donors express positive and negative motivations and recognize how donation connects to their important goals and values, ultimately resolving ambivalence and preventing psychosocial difficulties after donation.

The relationship between FRM and regret intensity was moderated by the type of family relationship, even after controlling for covariates. So far, family donors have been viewed as a homogeneous group compared with unrelated donors. However, this study revealed that non-child donors with lower FRM are the group most vulnerable to regret. We may infer that this is because key motivations, including expectations and rationales for donation, differ depending on the type of family relationship. Spousal donors tend to expect a transplant to enable the recipient to actively participate in family life and events while alleviating their own caregiving burden. Sibling donors, on the other hand, tend to donate to gain recognition and attention, avoid family tension, or fulfill a sense of obligation, often accompanied by ambivalence. Parent donors are primarily driven by an unconditional willingness to save their child, prioritizing the child's health above all else, often as a way to mitigate feelings of guilt over the child's illness. Non-child donors thus appear to donate for certain personal benefits as well as FRM, and unmet expectations in these areas tend to increase levels of regret.

In contrast, FRM was not significantly associated with regret intensity in child donors, whose regret levels were lower than those of non-child donors. A unique motive for child donors was a sense of obligation to repay their perceived indebtedness to the family, suggesting that their intention to gain personal benefits from donation is weak. They tend to view donation either as a gift given without expecting anything in return or as repayment for past support, perceiving it as an inherent duty regardless of the consequences.

Therefore, transplant teams should counsel prospective donors to thoroughly assess their core motivations and determine whether they expect personal benefits from donation. In South Korea, pre-donation counseling primarily focuses on verifying that the donor is a relative or long-time friend of the recipient and ensuring that financial motives are absent. However, beyond superficial or formal confirmation, counseling should explore the true nature of FRM, the donor's dominant motivation, and any personal expectations, including potential social or psychological benefits. Healthcare providers must also educate family donors about the possible outcomes of their expectations. Such a process will help donors clearly understand their own motivations and enable them to make genuinely autonomous decisions.

Additionally, in this study, non-child donors often assumed multiple roles as family members, caregivers, and donors, which could lead to psychosocial vulnerability. Despite these challenges, they tend to remain silent about their feelings, endure difficulties alone, and often experience loneliness as a result. Therefore, it is crucial to provide family donors, particularly non-child donors, with adequate emotional support and information before donation. Establishing support groups for donors with similar family relationships to recipients could also be beneficial.
Within these groups, donors could share their FRM, expectations, and post-donation experiences; such group therapy might help reduce overly optimistic expectations and mitigate feelings of regret.

This study has some limitations. First, since regret intensity was measured with only a single question, future research should use multi-item instruments. Moreover, FRM was retrospectively assessed after donation, which may introduce recall bias; prospective studies are therefore needed to provide more reliable data. Additionally, because the data used in this study were collected from a single hospital, the generalizability of the results may be limited. Future research should investigate these associations in other geographic populations and ensure a sufficiently large sample size.
This study examined preoperative factors influencing the intensity of regret among LLDs within second-degree family relationships. Non-child donors with low FRM were found to have a higher risk of experiencing intense regret than child donors. These findings highlight the importance of providing tailored support to donors, taking into account both the type of family relationship and their FRM levels. Transplant teams should counsel non-child donors who exhibit low FRM regarding their underlying motivations and expectations, providing targeted emotional support and psychological interventions to mitigate post-donation regret.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Long-term watermelon continuous cropping leads to drastic shifts in soil bacterial and fungal community composition across gravel mulch fields

Gravel mulch technology is one of the most crucial methods for surface coverage in dryland regions; it reduces evaporation and runoff in dry farming areas and has therefore become an essential agricultural management measure for promoting water-use efficiency in drylands. Owing to limiting factors including economic interests, tillage, and climatic conditions, continuous cropping of watermelon has long been common practice in gravel mulch fields. However, long-term continuous cropping of watermelon may induce soil mineral deficiencies and increase disease incidence. These abiotic and biotic changes can subsequently lead to a decrease in watermelon fruit yield and quality.

Soil microorganisms play crucial roles in mediating key ecosystem processes and functions. Hence, shifts in soil microbial composition can serve as a sensitive biological indicator of soil health. Previous studies have demonstrated that continuous cropping significantly alters soil microbial community structure. However, owing to differences in cropping systems, planting years, and research methodology, no consensus has been reached on the effects of long-term continuous cropping on soil microbial communities. Studying the effect of watermelon continuous cropping on soil microbial communities in gravel mulch fields is therefore essential for maintaining watermelon fruit yield and quality.

A great number of previous studies have revealed that continuous cropping may induce substantial shifts in soil physicochemical conditions, such as nutrient availability and enzymatic activities, and thereby significantly alter the diversity, abundance, and composition of soil microorganisms. As the two major taxa of soil microorganisms, fungi and bacteria have different dispersal abilities, metabolic activities, and environmental preferences. Importantly, soil bacteria and fungi compete for similar resources, and fungi have a stronger capacity to decompose complex molecules than bacteria. This may lead to different responses of soil bacterial and fungal communities to the same environmental drivers. Previous studies have reported that different soil physicochemical factors determine soil bacterial and fungal compositions. For example, the community structure of soil bacterial communities is shaped by soil pH, whereas that of soil fungal communities is strongly influenced by soil carbon content in the black soil zone of northeast China. Therefore, soil bacteria and fungi may respond differently to the variation in each soil physicochemical factor caused by long-term continuous cropping, and long-term continuous cropping may thus affect soil bacteria and fungi differently. For example, previous studies using the plate culture method found that soybean continuous cropping increases the abundance of soil fungi but decreases that of soil bacteria. In past decades, the effects of continuous cropping on soil microbial communities have been well explored in barley, soybean, cucumber, peanut, and cotton systems.
However, little is still known about how long-term continuous cropping of watermelon interacts with soil physicochemical factors to influence soil bacteria and fungi in gravel mulch fields. A major objective of our study was to test how long-term continuous cropping and soil physicochemical factors jointly alter soil bacterial and fungal composition, and to explore the links between variation in soil microbial composition and watermelon yield. We assessed the soil physicochemical properties as well as the bacterial and fungal compositions under different continuous cropping durations (CK, 1, 6, 11, 16, and 21 years) in watermelon (Citrullus lanatus) systems of a gravel mulch field on the Loess Plateau of China. We attempted to address the following specific questions: (1) Do soil bacterial and fungal compositions differ significantly among continuous cropping years? (2) How do continuous cropping and physicochemical factors jointly drive the variations in soil bacterial and fungal compositions? (3) Is the variation in soil bacterial and fungal composition significantly related to watermelon yield?
Site description and sampling

This study was conducted on a gravel mulch cropland in Zhongwei City of Ningxia Hui Autonomous Region (36° 57′ N, 105° 18′ E). In this typical dryland ecosystem, the annual mean precipitation, annual mean temperature, and annual mean evaporation are 247.4 mm, 7.1 °C, and 2100–3200 mm, respectively. The zonal soil type is mostly ash-calcium soil, and the zonal vegetation is desert grassland. Watermelon (Jincheng V) fields under continuous cropping for 1, 6, 11, 16, and 21 years (denoted as 1a, 6a, 11a, 16a, and 21a, respectively) were selected in this study. These cropped systems were managed with the same nutrient inputs and field management practices. Additionally, a non-cropped control treatment (CK) was included. In total, 18 bulk soil samples (six treatments × three replicates) were collected at the flowering stage of watermelon in 2019. In each treatment, 10 soil cores (20 cm depth) were randomly collected within an area of approximately 100 m² and then mixed thoroughly to form a composite sample (one replicate). Each composite soil sample was sieved through a 2 mm mesh and then subdivided into two parts: one portion was stored in thermally insulated boxes (at 4 °C) for determining soil physicochemical properties, and the other portion was stored at −20 °C for DNA extraction.

Soil physicochemical properties

The contents of soil organic matter (SOM) and total nitrogen (STN) were assessed by the K2Cr2O7 oxidation method and the Kjeldahl procedure, respectively. Soil available nitrogen (SAN) and total phosphorus (STP) contents were determined by the alkali diffusion method and the molybdenum blue method, respectively. Soil available potassium (SAK), extracted with 1 mol/L ammonium acetate (NH4OAc), was measured by inductively coupled plasma-atomic emission spectrometry. Soil pH was determined using a pH meter with a 1:2.5 ratio of fresh soil to deionized water. Soil moisture content (SM) was measured gravimetrically after drying soil in an oven at 105 °C for 48 h. Soil available phosphorus (SAP) was determined by Olsen's method. Soil water-soluble salinity content (SSC) was determined using an electric conductometer.

Molecular analyses

Genomic DNA was extracted from 0.5 g fresh soil samples using E.Z.N.A. Soil DNA Kits (OMEGA, United States) following the manufacturer's instructions. The V3–V4 hypervariable region of the bacterial 16S rRNA gene was amplified using primers 338F (5′-ACTCCTACGGGAGGCAGCAG-3′) and 806R (5′-GGACTACNNGGGTATCTAAT-3′). Universal primers ITS1F (5′-CTTGGTCATTTAGAGGAAGTAA-3′) and ITS2R (5′-TGCGTTCTTCATCGATGC-3′) were used to amplify the fungal internal transcribed spacer (ITS) region. These primers contained a set of 8-nucleotide barcode sequences unique to each sample. The PCR program was as follows: 95 °C for 5 min; 25 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 30 s; and a final extension at 72 °C for 10 min. PCR reactions were performed in triplicate in a 25 µL mixture containing 2.5 µL of 10× Pyrobest Buffer, 2 µL of 2.5 mM dNTPs, 1 µL of each primer (10 µM), 0.4 U of Pyrobest DNA Polymerase (TaKaRa), and 15 ng of template DNA. Amplicons were extracted from 2% agarose gels, purified using the AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, U.S.) according to the manufacturer's instructions, and quantified using QuantiFluor™-ST (Promega, U.S.). Purified amplicons were pooled in equimolar amounts and paired-end sequenced (2 × 300 bp) on an Illumina MiSeq platform according to standard protocols.
Fungal and bacterial sequences > 200 bp with an average quality score > 20 and without ambiguous base calls were processed using the QIIME package. These high-quality sequences were clustered into operational taxonomic units (OTUs) at a 97% similarity threshold using UPARSE. Bacterial and fungal taxonomies were assigned against Silva v128 and UNITE v8.0, respectively. A randomly selected subset of 15,777 bacterial and 38,564 fungal sequences per sample was used in the subsequent analyses to reduce the effects of different sequencing depths. The soil bacterial and fungal raw sequence data used in this study have been submitted to the NCBI Sequence Read Archive under BioProject PRJNA775053.

Data analysis

Nine soil variables (SOM, STN, STP, SAP, SAN, SAK, SM, SSC, and pH) were used in our analysis. All explanatory variables were standardized so that parameter estimates could be interpreted on a comparable scale. Principal component analysis (PCA) was conducted within the "vegan" package to reduce redundancy in the soil nutrient data (SOM, STN, STP, SAP, SAN, and SAK). The first two soil principal components (SPCs; i.e., SPC1 and SPC2) jointly explained more than 95% of the total variation and were therefore used in the following analyses (Table S). One-way ANOVA with Tukey's test was carried out to test the influence of continuous cropping on soil physicochemical conditions.

To test the differences in bacterial and fungal taxonomic composition across continuous cropping years, we selected the 8 most dominant bacterial genera and the 13 most dominant fungal genera based on the taxonomic abundance data (average relative abundance > 1.0% across all samples). One-way ANOVA was then conducted to assess the significance of differences among treatments. Pairwise Bray–Curtis distances for the bacterial and fungal communities and standardized environmental Euclidean distances were calculated within the "vegan" package. Permutational analysis of variance (PERMANOVA) was carried out to test the influence of continuous cropping on bacterial and fungal community compositions, and principal coordinate analysis (PCoA) was used to visualize the variations in bacterial and fungal compositions across treatments. Both PERMANOVA and PCoA were conducted within the "vegan" package in R. Mantel tests (10,000 permutations) were conducted to examine the relationships between soil variables and soil microbial composition.

Structural equation models (SEMs) were then constructed to explore the direct and indirect influences of continuous cropping and soil conditions on the variation in soil fungal and bacterial composition. Here, a direct influence means that a given variable directly alters the community composition of bacteria and fungi, whereas an indirect influence means that a given variable alters soil bacterial and fungal compositions by affecting other variables. The χ2 test, comparative fit index (CFI), goodness-of-fit index (GFI), and root mean square error of approximation (RMSEA) were used to assess model fit. Standardized direct and indirect effects were summed to evaluate the standardized total effect of each variable. SEM was performed within the "lavaan" package.
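As a concrete illustration of the soil-nutrient ordination described above, the following is a minimal R sketch; the data frame `soil_env` and its column names are hypothetical placeholders, and vegan's unconstrained rda() is used as the PCA implementation because the text names the vegan package.

```r
library(vegan)

# Soil nutrient variables to be condensed into principal components
nutrients <- soil_env[, c("SOM", "STN", "STP", "SAP", "SAN", "SAK")]

# An unconstrained rda() on standardized variables is a PCA in vegan
pca <- rda(nutrients, scale = TRUE)
summary(pca)  # eigenvalues and proportion of variance per component

# SPC1 and SPC2 site scores, carried forward into the Mantel tests and SEM
spc <- scores(pca, choices = 1:2, display = "sites")
```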
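The one-way ANOVA with Tukey's test applied to each soil variable might be sketched as follows; `soil_dat` and its columns are hypothetical placeholders for the per-sample soil measurements.

```r
# One soil property (here SOM) across the six treatments (CK, 1a, ..., 21a)
soil_dat$treatment <- factor(soil_dat$treatment,
                             levels = c("CK", "1a", "6a", "11a", "16a", "21a"))

av <- aov(SOM ~ treatment, data = soil_dat)
summary(av)   # overall F-test for differences among treatments
TukeyHSD(av)  # pairwise comparisons between cropping durations

# The same two calls can be looped over the remaining soil variables
```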
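The community-level pipeline (even-depth subsampling, Bray–Curtis dissimilarity, PERMANOVA, and PCoA) can be sketched with vegan as follows; `bact_otu` (a samples × OTUs count matrix) and `meta` (sample metadata with a treatment factor) are hypothetical.

```r
library(vegan)

set.seed(123)
# Rarefy each sample to the even depth used for bacteria (15,777 reads)
bact_rar <- rrarefy(bact_otu, sample = 15777)

# Pairwise Bray-Curtis dissimilarity between samples
bc <- vegdist(bact_rar, method = "bray")

# PERMANOVA: does composition differ among continuous cropping years?
adonis2(bc ~ treatment, data = meta, permutations = 999)

# PCoA ordination of the same dissimilarity matrix
pcoa <- cmdscale(bc, k = 2, eig = TRUE)
plot(pcoa$points, col = meta$treatment, pch = 19,
     xlab = "PCoA1", ylab = "PCoA2")
```

The fungal matrix would be processed identically, with the rarefaction depth set to 38,564 reads per sample.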
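The Mantel tests relating community dissimilarity to environmental distance might look like this, reusing `bc` from the previous sketch; `soil_env` again is a hypothetical soil-variable table aligned to the same samples.

```r
# Euclidean distance over all standardized soil variables
env_dist <- dist(scale(soil_env))

# Community dissimilarity vs. overall environmental distance
mantel(bc, env_dist, method = "pearson", permutations = 9999)

# Single-variable tests, e.g., soil moisture or pH alone
mantel(bc, dist(scale(soil_env$SM)), permutations = 9999)
mantel(bc, dist(scale(soil_env$pH)), permutations = 9999)
```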
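Finally, a hedged lavaan sketch of the SEM structure implied by the text, in which continuous cropping acts on community composition only indirectly, through soil conditions; the per-sample variables in `sem_dat` and the exact path set are illustrative, not the authors' specification.

```r
library(lavaan)

model <- '
  # Continuous cropping duration shapes soil conditions ...
  SM   ~ years
  pH   ~ years
  SSC  ~ years
  SPC2 ~ years
  # ... which in turn shape community composition (the indirect pathway);
  # comm could be, e.g., the first PCoA axis of the Bray-Curtis matrix
  comm ~ SM + pH + SSC + SPC2
'

fit <- sem(model, data = sem_dat)
fitMeasures(fit, c("chisq", "df", "pvalue", "cfi", "rmsea"))
summary(fit, standardized = TRUE)  # standardized path coefficients
```

Standardized total effects are then obtained by summing each variable's standardized direct effect and the products of coefficients along its indirect paths.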
Effects of continuous cropping on soil physicochemical properties and watermelon yield

The ANOVA results showed that all soil physicochemical variables varied significantly among the continuous cropping years (P < 0.001, Table ). Long-term continuous cropping remarkably decreased STN, STP, SAN, SAK, and SAP and remarkably increased SM. SOM initially increased and then decreased with increasing continuous cropping time, whereas SSC initially decreased and then increased. Soil pH was lowest in 1a and highest in 21a. One-way ANOVA also revealed that watermelon yield differed significantly among the continuous cropping years, decreasing sharply with increasing cropping duration (Fig. ). This yield decline motivated the selection of the 1-, 6-, 11-, 16-, and 21-year continuous cropping treatments in this study.

Effects of watermelon continuous cropping on soil bacterial and fungal compositions

A total of 256,115 bacterial and 687,385 fungal high-quality sequences were identified across the six treatments and classified into 6,435 bacterial and 1,349 fungal OTUs, respectively. Across all samples, the dominant genera (average relative abundance > 1.0%) of the soil bacterial communities were MND1 (4.41%), Rubrobacter (1.87%), RB41 (1.69%), Metagenome (1.66%), Roseisolibacter (1.23%), uncultured_Chloroflexi_bacterium (1.17%), Solirubrobacter (1.08%), and Sphingomonas (1.04%). The soil fungal communities were dominated by Ceratobasidium (8.32%), Fusarium (7.36%), Mortierella (7.16%), Acremonium (3.45%), Aspergillus (3.09%), Thielavia (2.81%), Stephanospora (1.97%), Glomus (1.92%), Podospora (1.83%), Stachybotrys (1.76%), Ramicandelaber (1.58%), Conocybe (1.18%), and Metarhizium (1.10%).

ANOVA results further showed that the relative abundances of all dominant bacterial genera varied among continuous cropping years (P < 0.001, Fig. ). Except for Acremonium, Glomus, and Conocybe, the other 10 dominant fungal genera also varied among continuous cropping years (P < 0.001, Fig. , Fig. S).

PERMANOVA demonstrated that, at the OTU level, the species compositions of soil bacteria and fungi differed significantly among continuous cropping years (R2 = 0.717 and 0.682, respectively; P < 0.001; Fig. ). Across the 6,435 bacterial OTUs, the five continuous cropping treatments shared only 1,347 OTUs. The numbers of unique bacterial OTUs detected in a single treatment were 702 for 1a, 327 for 6a, 235 for 11a, 255 for 16a, and 311 for 21a (Fig. S). Across the 1,349 fungal OTUs, the five continuous cropping treatments shared only 348 OTUs. The numbers of unique fungal OTUs detected in a single treatment were 145 for 1a, 48 for 6a, 67 for 11a, 46 for 16a, and 16 for 21a (Fig. S). These results indicate that different bacterial and fungal species inhabit the soil under different continuous cropping durations. Additionally, the community compositions of soil bacteria and fungi exhibited gradual, taxon-specific shifts along the continuous cropping duration gradient (from CK and 1a to 16a and 21a).

More importantly, we also observed significant relationships between watermelon yield and the variation in species compositions of soil bacteria and fungi (Fig. ). For bacteria, the relative abundances of Metagenome, Roseisolibacter, and uncultured_Chloroflexi_bacterium were significantly correlated with watermelon yield (P < 0.05, Table S).
For fungi, the relative abundances of Ceratobasidium, Stephanospora, Podospora, and Conocybe were significantly correlated with watermelon yield (P < 0.05, Table S).

Direct and indirect influences of continuous cropping and soil attributes on variations in soil bacterial and fungal compositions

Mantel tests showed that the compositional dissimilarities of the soil bacterial and fungal communities were significantly related to variations in soil nutrients, SSC, pH, and SM (all P < 0.01, Table ). Furthermore, soil bacterial compositional dissimilarity was most strongly related to the difference in SM (Mantel R = 0.75), whereas soil fungal compositional dissimilarity was most strongly correlated with pH variation (Mantel R = 0.54). Notably, the compositional dissimilarities of soil bacteria and fungi were also significantly related to differences in continuous cropping (Mantel R = 0.57 and 0.51, respectively).

The fitted SEMs further confirmed that soil nutrients, SM, pH, SSC, and continuous cropping jointly explained 73% and 64% of the total variation in soil bacterial and fungal community composition, respectively (Fig. a and b). Continuous cropping had no direct influence on soil bacterial and fungal communities but indirectly altered their community composition by affecting soil physicochemical conditions. Additionally, SSC, SM, pH, and soil nutrients had remarkable direct influences on soil bacterial and fungal community compositions. The standardized total effects derived from the SEMs revealed that variation in soil bacterial community composition was predominantly driven by continuous cropping, followed by SM, pH, SPC2, and SSC, whereas soil fungal community composition was regulated by pH, continuous cropping, SPC2, SSC, SM, and SPC1 (Fig. ).
The ANOVA results showed that all soil physicochemical variables significantly varied among the different continuous cropping years ( P < 0.001, Table ). Long-time continuous cropping remarkably decreased the STN, STP, SAN, SAK and SAP and remarkably increased SM. SOM initially increased and then decreased with increasing continuous cropping time. By contrast, SSC initially decreased and then increased with increasing continuous cropping time. Soil pH was the lowest in 1a and the highest in 21a. One-Way ANOVA also revealed that watermelon yield was significantly different among different continuous years, and watermelon yield sharply decreased with increasing continuous years (Fig. ). Therefore, watermelon continuous cropping for 1, 6, 11, 16 and 21 years was selected in this study.
A total of 256,115 and 687,385 high-quality bacterial and fungal sequences were identified across six treatments, respectively, and classified into 6,435 and 1,349 bacterial and fungal OTUs, respectively. Across all samples, the dominant genera (average relative abundance > 1.0%) of soil bacterial communities were MND (4.41%), Rubrobacter (1.87%), RB41(1.69%), Metagenome (1.66%), Roseisolibacter (1.23%), uncultured_Chloroflexi_bacterium (1.17%), Solirubrobacter (1.08%) and Sphingomonas (1.04%). Soil fungal communities were dominated by Ceratobasidium (8.32%), Fusarium (7.36%), Mortierella (7.16%), Acremonium (3.45%), Aspergillus (3.09%), Thielavia (2.81%), Stephanospora (1.97%), Glomus (1.92%), Podospora (1.83%), Stachybotrys (1.76%), Ramicandelaber (1.58%), Conocybe (1.18%), and Metarhizium (1.10%). ANOVA results further showed that the relative abundances of all genera for soil bacteria varied among different continuous cropping years ( P < 0.001, Fig. ). Except for Acremonium , Glomus and Conocybe , other 10 dominant fungal genera varied among different continuous cropping years ( P < 0.001, Fig. , Fig. S ). PERMANOVA demonstrated that at the OTU level, the species compositions of soil bacteria and fungi significantly differed among different continuous cropping years ( R 2 = 0.717 and 0.682, respectively; P < 0.001; Fig. ). Across 6,435 bacterial OTUs, five continuous cropping years only shared 1,347 OTUs. The unique bacterial OTUs detected in a single treatment was 702 for 1a, 327 for 6a, 235 for 11a, 255 for 16a, and 311 for 21a (Fig. S ). Across 1,349 fungal OTUs, five continuous cropping years only shared 348 OTUs. The unique bacterial OTUs detected in a single treatment was 145 for 1a, 48 for 6a, 67 for 11a, 46 for 16a, and 16 for 21a (Fig. S ). These results indicate that different bacterial and fungal species inhabit the soil under different continuous cropping years. Additionally, we observed that the community composition of soil bacteria and fungi exhibited differentially gradual shifts along continuous cropping duration gradients (from CK, 1a to 16a and 21a). More importantly, we also observed significant relationships between watermelon yield and the variation in species compositions of soil bacteria and fungi (Fig. ). For bacteria, the relative abundance of metagenome, Roseisolibacte and Chloroflexi_bacterium had a remarkable correlation with watermelon yield ( P < 0.05, Table S ). For fungi, the relative abundance of Ceratobasidium, Stephanospora, Podospora and Conocybe had a remarkable correlation with watermelon yield ( P < 0.05, Table S ).
Direct and indirect influences of continuous cropping and soil attributes on variations in soil bacterial and fungal compositions
Mantel tests showed that the compositional dissimilarities of soil bacterial and fungal communities were significantly related to the variations in soil nutrient, SSC, pH, and SM (all P < 0.01, Table ). Furthermore, we found that soil bacterial compositional dissimilarity was more strongly related to the difference in SM (Mantel R = 0.75), whereas soil fungal compositional dissimilarity was more strongly correlated with pH variation (Mantel R = 0.54). Notably, the compositional dissimilarities of soil bacteria and fungi were significantly related to differences in continuous cropping (Mantel R = 0.57 and 0.51, respectively). Fitted SEM further confirmed that soil nutrient, SM, pH, SSC, and continuous cropping jointly explained 73% and 64% of the total variations in soil bacterial and fungal community compositions, respectively (Fig. a and b). Continuous cropping had no direct influence on soil bacterial and fungal communities but could indirectly alter their community composition by affecting soil physicochemical conditions. Additionally, SSC, SM, pH, and soil nutrient had remarkable direct influences on soil bacterial and fungal community compositions. The standardized total effects derived from the SEM revealed that variation in soil bacterial community composition was predominantly driven by continuous cropping, followed by SM, pH, SPC2, and SSC, whereas soil fungal community composition was regulated by pH, continuous cropping, SPC2, SSC, SM and SPC1 (Fig. ).
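A Mantel test of this kind correlates the entries of two sample-by-sample distance matrices and judges significance by permuting sample labels. The minimal implementation below assumes square, symmetric distance matrices (for example, Bray-Curtis community dissimilarity versus Euclidean distance in a soil variable); it sketches the statistic only and is not the software used in the study.

```python
# A minimal permutation-based Mantel test; d1 and d2 are square,
# symmetric distance matrices over the same samples.
import numpy as np

def mantel(d1, d2, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)          # condensed upper triangle
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]   # observed Mantel R
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(d1.shape[0])        # relabel samples in d1
        hits += np.corrcoef(d1[np.ix_(p, p)][iu], d2[iu])[0, 1] >= r_obs
    return r_obs, (hits + 1) / (n_perm + 1)     # one-tailed permutation P

# Toy usage with synthetic symmetric matrices standing in for community
# dissimilarity and a soil-variable distance.
pts = np.random.default_rng(1).random((15, 3))
d_a = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
noise = np.random.default_rng(2).random((15, 15)) * 0.1
d_b = d_a + (noise + noise.T) / 2.0             # keep the matrix symmetric
r, p = mantel(d_a, d_b)
print(f"Mantel R = {r:.2f}, P = {p:.3f}")
```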
Long-term continuous cropping alters soil physicochemical properties
The long-term continuous cropping of a single crop can induce agricultural ecosystem degeneration, including crop yield reduction, nutrient imbalance, and deterioration of soil physicochemical properties. However, no consensus has been reached on the impact of continuous cropping on soil physicochemical properties across different crop systems. In this study, we observed that soil nutrient content, pH, salinity, and moisture content varied considerably across the different continuous cropping years. In contrast to previous reports that long-term continuous cropping can improve soil nutrient contents, we found that soil nutrient contents clearly declined with increasing continuous cropping duration. A previous study reported that continuous cropping of banana increases soil pH. By contrast, we found that soil pH decreased in 1a and then increased with increasing duration of watermelon continuous cropping. Liu et al. and Zhong et al. observed that soil organic carbon is high under continuous cropping, whereas our results showed that SOM increased in the short-term treatments (1a and 6a) and then decreased under long-term continuous cropping (11a, 16a, and 21a). In addition, soil moisture content substantially increased in the continuous cropping treatments, which may be because long-term irrigation and crop cover decreased soil water evaporation (Table ). Together, these findings suggest that continuous watermelon cropping substantially alters the soil conditions in a gravel mulch field, but its impact varies remarkably among soil physicochemical properties.
Long-term continuous cropping alters soil bacterial and fungal compositions by affecting soil physicochemical properties
A large number of previous studies have reported that long-term continuous cropping significantly changes soil microbial composition. In agreement with previous studies, our results demonstrated that bacterial and fungal community compositions varied considerably among different durations of watermelon continuous cropping. We also observed significant variation in the dominant bacterial and fungal genera along the gradient of continuous cropping years, which is consistent with previous findings. However, we also found different responses of fungal and bacterial genera to continuous cropping years. For instance, the relative abundance of bacterial MND1 and fungal Ceratobasidium increased with increasing continuous cropping years, while that of bacterial Rubrobacter and Solirubrobacter and fungal Fusarium showed the opposite trend, suggesting that although long-term continuous cropping alters soil microbial composition, its effect varies among microbial taxa. Interestingly, soil bacterial composition was more strongly altered by continuous cropping than soil fungal composition. Long-term continuous cropping gradually reduced soil nutrient contents and altered soil pH, organic carbon, and salinity. Functional traits could mediate species fitness and performance. Soil bacteria and fungi compete for similar resources, but fungi have a stronger capacity than bacteria to decompose complex molecules. Moreover, soil fungi can maintain community stability by forming multiple mutualisms (e.g., mycorrhizae) with crops. Hence, soil fungi may have greater tolerance of, and adaptability to, variation in soil physicochemical properties than bacteria.
As a result, continuous cropping can have a strong effect on soil bacterial composition. Additionally, we observed different shifts in soil bacterial and fungal compositions across different durations of continuous cropping. This result arises partly because the major soil factors that drive soil bacterial and fungal compositions changed differently across different durations of continuous cropping. SEM revealed that long-term continuous cropping had no direct influence on soil bacterial and fungal compositions but could alter their species compositions by affecting soil conditions. Soil factors, such as soil pH and nutrient content, are the major drivers of soil microbial community composition. However, the relative influence of different soil factors on microbial composition differed between bacterial and fungal communities. Mantel tests and SEM together confirmed that variation in soil bacterial composition was driven mainly by soil moisture content, followed by soil pH, nutrient content, and salinity. By contrast, soil fungal composition was controlled by soil pH, followed by nutrient content, salinity, and moisture content. As expected, soil moisture content determined the community composition of soil bacteria because water availability drives biodiversity and ecosystem functioning. Notably, we found that soil bacterial composition was more influenced by soil moisture content than soil fungal composition was. This may be because fungal hyphae facilitate access to soil water, and their chitinous cell walls increase their resistance to variation in soil moisture content. Moreover, soil pH also influences soil microbial assembly and is a key determinant of soil fungal community composition. Therefore, soil pH plays an important role in shaping soil fungal composition in gravel mulch fields. We also observed that soil moisture content and pH primarily influenced bacterial and fungal compositions, respectively. Our result was partly consistent with traditional viewpoints. Additionally, soil salinity is considered a key driver of soil microbial communities, and our findings also showed that soil salinity plays a role in altering soil bacterial and fungal compositions. These results indicate that soil fungi and bacteria respond differently to the variation in each soil factor caused by long-term continuous cropping. Together, our study provides empirical evidence that long-term continuous cropping of watermelon alters soil bacterial and fungal compositions mainly by affecting soil physicochemical properties.
Changing community composition of soil bacteria and fungi leads to a decline in watermelon yield
Numerous studies have reported that long-term continuous cropping leads to alterations in soil microbial composition and crop yield reduction. However, little is known about the influence of soil microbial changes on crop yield reduction. In this study, we observed significant relationships between watermelon yield reduction and the variation in OTU-level compositions of soil bacteria and fungi. More importantly, we also found that the relative abundance of bacterial Metagenome was positively related to watermelon yield.
An increase in the relative abundance of fungal Ceratobasidium and Stephanospora, together with a decrease in that of Podospora, led to watermelon yield reduction, indicating that long-term continuous cropping may decrease watermelon yield by changing soil microbial composition, especially by disturbing the balance between beneficial and pernicious microorganisms. In this study, we only analysed the taxonomic composition of soil bacteria and fungi. Future research should explore the key functional taxa that improve soil quality and increase watermelon yield through functional annotation and phylogenomics, and should consider combining bio-organic fertilizers, crop rotation, and functional microorganisms to effectively prevent soil degradation and promote crop growth.
This study conducted a comprehensive comparison of the influence of continuous cropping on soil bacterial and fungal compositions and summarized the continuous-cropping-induced variations in soil factors that drove shifts in those compositions. We observed that the community compositions of soil bacteria and fungi were remarkably altered by continuous cropping in gravel mulch fields. SEM further demonstrated that continuous cropping indirectly altered soil bacterial and fungal compositions by causing remarkable variations in soil attributes. In addition, soil bacterial and fungal compositions were driven by continuous-cropping-induced variations in soil moisture content and pH, respectively. As a result, soil bacterial and fungal communities exhibited differential compositional shifts across different years of continuous cropping. Additionally, the variation in soil bacterial and fungal composition was significantly correlated with watermelon yield reduction. Together, our findings provide first-hand evidence that long-term continuous cropping of watermelon alters soil bacterial and fungal compositions mainly by affecting soil physicochemical properties in gravel mulch fields.
Fluorescent Ligand Equilibrium Displacement: A High-Throughput Method for Identification of FMN Riboswitch-Binding Small Molecules
Worldwide, pathogenic bacterial infections killed 7.7 million people in 2019. Despite the continued development of antibiotics since the discovery of penicillin in 1928 and the wide availability of antibiotics, bacterial infections cause 13.6% of deaths globally. As a result, bacterial infections continue to be one of the most significant health concerns, especially considering the rise in resistance to current antibacterial or antibiotic drugs. According to a 2019 report by the Centers for Disease Control and Prevention, antibiotic development has stagnated, with only 32 antibiotics under development for bacteria that pose the greatest threats to human health. Out of these, six molecules have been classified as innovative, representing new chemical classes, novel mechanisms of action, or absence of identified cross-resistance. Identifying new targets for antibiotic development, developing novel methods to identify molecular scaffolds with antimicrobial properties, and proper antimicrobial stewardship worldwide are imperative. Current antibiotics target bacterial proteins or ribosomes. In the last 10 years, structured RNA molecules that are not part of the ribosome have joined the discussion as potential antibiotic or antimicrobial targets. The validity of RNA as a novel antibiotic target is supported by research that revealed ribosome-targeting antibiotics that bind specifically to rRNA, not ribosome accessory proteins. This illustrates that RNA-binding small molecules (SMs) can be selective for their targets and can be used with minimal off-target effects. An emerging antibacterial target class is bacterial riboswitches, structured RNA elements that regulate the transcription of specific genes or translation of specific gene products. While riboswitches are present in all domains of life, they are over-represented in bacteria. Most riboswitches are cis-regulatory, with the riboswitch sequence located 5′ of the gene it regulates; typically, their ligand is intrinsically related to the genes 3′ of the riboswitch. Most WHO-priority pathogens, bacteria that pose the greatest threats to human health, contain riboswitches that regulate key biosynthetic pathways. In 2015, ribocil, a compound that inhibited bacterial growth by binding to a riboswitch, was identified using a phenotypic screen. This discovery caused a surge of interest in riboswitches as an antibacterial target. However, riboswitch-specific drug candidates have not progressed past early laboratory mouse trials despite some promising results. Such work indicates a need to discover new compounds with the potential for development as antibacterial drugs. To discover compound structures, or scaffolds, that can aid in creating new antibiotics through medicinal chemistry refinement, we have established a cell-free, in vitro method to efficiently and rapidly screen SMs that interact with the flavin mononucleotide riboswitch. Flavin mononucleotide (FMN) plays a crucial role as a cofactor in numerous biosynthetic pathways and serves as a common ligand for a specific riboswitch class. FMN riboswitch (FRS) sequences are highly conserved and are present in 41 out of 49 priority bacterial classes, making them compelling targets for antibiotic development.
Additionally, given the already existing foundational FRS biochemical characterization and identification of non-native ligands that can target FRS, we can utilize these previously identified biomolecules as controls to validate our experimental design. Assays previously used to identify novel riboswitch ligands include cell-based growth assays, detectable reporter systems, and methods that directly monitor ligand binding or riboswitch cellular function. The most common techniques to identify any antibiotic compound are cell-based assays, specifically those searching for a phenotype rather than a specific interaction, like the bacterial growth inhibition assay used to discover ribocil. Other cellular methods utilize detectable reporters specifically designed for the riboswitch of interest. These systems typically have a fluorescent or bioluminescent protein sequence downstream from the riboswitch to monitor changes in fluorescence or luminescence as a measure of riboswitch activation or inactivation. While these systems are robust, they typically require specific bacterial strains and growing conditions, and they often fail to identify compounds that possess activity against the target but are impermeable to the cellular membrane or prone to efflux. These overlooked molecules could potentially be optimized for activity in bacterial cells and, therefore, represent a potential source of additional chemical starting points. Methods involving labeled RNA often utilize a fluorescently tagged RNA scaffold that produces a signal change upon interaction with a ligand. This method requires detailed knowledge of the three-dimensional structure of the RNA and extensive controls to ensure that the fluorescent tags do not interfere with the nascent interactions within the RNA or between the RNA and ligand. Even with careful planning, the presence of the label may bias the results. Similarly, fluorescently labeled ligands have been used for some riboswitches to monitor the displacement of a native ligand in the presence of other potential binding partners. Such a process can be complex since it requires large amounts of the fluorescent ligand and knowledge of the binding mode of the native ligand to design the fluorescent probe molecule. Due to these limitations, we leveraged the inherent fluorescence of FMN to develop Fluorescent Ligand Equilibrium Displacement (FLED) as a high-throughput, label-free method to identify novel, structurally distinct molecules that bind to the FRS. Hits from FLED screening can be further developed into antibacterial compounds or sensor molecules for fundamental chemistry research.
2.1. Considerations for HT Riboswitch Screening (Assay Principle)
In order to rapidly screen for SMs that bind to FRS, it is necessary to consider the scope and context of the screening. The assay was developed to be completely in vitro and cell-free to maximize the number of compounds we could screen, to reduce the cost of the assay, and to identify as many hit SMs as possible. As an alternative to using label-based approaches, we chose a specific riboswitch with an intrinsically fluorescent native ligand, FMN. Upon interaction with the FRS, forming an FMN–RNA complex, the fluorescence of FMN is modestly quenched. This property of FMN allows for all screening to be based on the increase in FMN fluorescence when it is unbound to the riboswitch.
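Because the readout is simply the fluorescence of unbound FMN, the expected FLED signal can be reasoned about with a two-ligand competitive binding model. The sketch below solves the coupled 1:1 equilibria numerically; the dissociation constants are illustrative assumptions, not values measured in this work, and the function is hypothetical rather than part of any published pipeline.

```python
# A minimal equilibrium model of the FLED readout: free FMN (unquenched)
# rises as a competitor displaces it from the riboswitch. Kd values are
# assumed for illustration only.
from scipy.optimize import brentq

def free_fmn_fraction(rna_tot, fmn_tot, comp_tot, kd_fmn, kd_comp):
    """Solve the coupled 1:1 equilibria for free riboswitch, then return
    the fraction of total FMN that is free (i.e., fluorescent)."""
    def residual(r):  # mass balance on total RNA at free-RNA guess r
        f = fmn_tot / (1.0 + r / kd_fmn)     # free FMN at this r
        c = comp_tot / (1.0 + r / kd_comp)   # free competitor at this r
        return r * (1.0 + f / kd_fmn + c / kd_comp) - rna_tot
    r = brentq(residual, 0.0, rna_tot)       # residual brackets a root on [0, Rt]
    return (fmn_tot / (1.0 + r / kd_fmn)) / fmn_tot

# Assay-like conditions: 1.5 uM FRS, 0.75 uM FMN; assumed Kd values in uM.
for comp in (0.0, 1.0, 10.0, 100.0):
    frac = free_fmn_fraction(1.5, 0.75, comp, kd_fmn=0.01, kd_comp=0.1)
    print(f"[competitor] = {comp:6.1f} uM -> free FMN fraction = {frac:.2f}")
```

Under these assumed constants, the free-FMN fraction rises from near zero to most of the total as the competitor saturates the riboswitch, which is exactly the fluorescence increase FLED detects.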
Although other groups have used this quenching property to monitor the competition of FMN with ligands, the method has not been optimized as a primary high-throughput screening method. Pure FRS can be easily and efficiently created using traditional in vitro transcription, and both FMN and the positive control compound ribocil are commercially available. By using small volumes and relatively inexpensive reagents, this assay is quite cost-effective. Additionally, optimized incubation times and rapid plate reading allow for multiple assays to be performed in parallel. Taken together, this assay is highly scalable for a variety of laboratory environments.
2.2. Fluorescent Ligand Equilibrium Displacement Development and Assay Optimization
Through systematic testing of FMN to FRS ratios and RNA concentrations, the optimum background and signal increase upon unbinding were achieved using a 1:2 stoichiometric ratio, FMN to FRS, with a final concentration of 1.5 μM FRS. The determined ratio and concentration decreased the baseline fluorescence and lowered the standard deviation between samples in the experimental assay. We tested multiple well plates and sample volumes, and the best results with the lowest sample requirements were achieved using 384-well microplates (Corning, product number 4514) and a sample volume of 10 μL. For our positive control, we chose to use the modified compound ribocil-C due to its increased binding affinity compared to ribocil. In all further writing, ribocil refers to ribocil-C. Primary and validation screens were performed using an SM or positive control concentration of 10 μM. Once the sample concentrations were determined, the difference in fluorescence signal was further optimized by varying the incubation time of the compounds with the FMN–RNA complex. In order to minimize the effect of RNA degradation through magnesium-catalyzed RNA hydrolysis, we chose to limit incubation times to 60 min. We verified that samples incubated for this time range exhibited minimal RNA degradation. To determine the minimal incubation time required for acceptable assay performance, we measured the change in fluorescence between positive and negative controls and calculated the assay Z' at various incubation times under one hour. Z' represents the statistical difference between positive and negative controls while taking the deviation of each into account and is a measure of assay robustness. Because incubation times greater than 15 min showed a Z' value well above 0.5, the standard Z' cutoff for high-throughput assays, we chose an incubation time between 30 and 45 min, with an average incubation time of 37 min, for all subsequent experiments. Incubation time was not increased beyond 60 min due to the time required for the system to reach equilibrium; the longer time period increases the risk of RNA degradation and subsequent false positives. DMSO is known to have a mild denaturing effect on RNA, which could limit the ability of the assay to tolerate high DMSO concentrations, as structural destabilization could produce a false positive due to FMN release. We, therefore, performed a DMSO tolerance test, which demonstrated that the background fluorescence was insensitive to DMSO concentrations up to 13%.
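The Z' factor referenced above is a standard screening robustness statistic computed from control wells alone (Zhang et al., 1999). A minimal sketch with placeholder well intensities:

```python
# Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|; values > 0.5 are
# conventionally taken to indicate a robust high-throughput assay.
import numpy as np

def z_prime(positive, negative):
    spread = 3.0 * (np.std(positive, ddof=1) + np.std(negative, ddof=1))
    dynamic_range = abs(np.mean(positive) - np.mean(negative))
    return 1.0 - spread / dynamic_range

pos = [9800, 10150, 9920, 10060]  # e.g., ribocil-C control wells (placeholder)
neg = [3100, 3040, 2980, 3120]    # e.g., DMSO-only control wells (placeholder)
print(f"Z' = {z_prime(pos, neg):.2f}")
```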
The DMSO tolerance data demonstrate that the DMSO concentrations used (5% for concentration response testing and 0.5% for all other screening) were well tolerated by our assay, and the DMSO concentration could be increased further to accommodate higher SM concentrations or compounds with lower miscibility. In the work presented here, the FRS:FMN complex was plated using a manual multiple-channel repeat-dispensing pipette; however, further optimization using more automated systems, such as liquid handlers similar to the Biomek FX, could be performed to expedite the experimental setup further. The experimental FLED workflow is illustrated in .
2.3. Hit Confirmation and Counter Screening
To test our method using an SM library, we screened approximately 15k diverse, drug-like compounds from a collection maintained by the University of Michigan Center for Chemical Genomics. The compounds were screened across multiple days and multiple RNA preparations, with an average Z' value of 0.805 ± 0.09, a score typically considered excellent for high-throughput screening. The screening was completed in four phases, each with specific hit thresholds chosen to retain as many viable hit compounds as possible. Phase 1 experiments are intended to identify any molecules that interact with FRS, but with a higher margin of error. Phase 2 experiments remove experimental false positives and separate out compounds that may possess properties that make them incompatible with binding analysis by FLED. Phase 3 identifies and removes compounds that are so intrinsically fluorescent that unbinding of FMN from the FRS cannot be detected. Phase 4 assesses whether compounds show an SM concentration- or dose-mediated unbinding of FMN. Specifically, Phase 1 consisted of single-replicate screening of each compound using FLED. Any compounds meeting the hit criterion (a fluorescence signal at least three standard deviations above that of the average negative control) were advanced to Phase 2 of screening. In Phase 2, each compound was retested at the same concentration in triplicate or quadruplicate in order to account for the inherent variability in high-throughput screening. The hit criterion and the Z' calculations were based on the negative control rather than the mean of all samples in order to avoid the contributions of false positives due to inherent properties of the compounds tested (such as intrinsic fluorescence or aggregation) artificially increasing the hit threshold. During this phase, compounds were sorted into two hit categories based on satisfying criteria 1A or 1B. Without further testing, molecules with fluorescent signals between criteria 1A and 2A were advanced to Phase 4. However, compounds with a signal above criterion 1B were tested in Phase 3, because the high fluorescent signal in these cases could be due to intrinsic fluorescence, not equilibrium competition with FMN. Unlike Phases 1 and 2, Phase 3 tested compounds using an altered version of FLED. Each molecule was plated for a final concentration of 10 μM as before, but buffer without FRS or FMN was added to each compound well instead of the FMN:FRS complex. Compounds with a signal greater than the difference between positive and negative controls were flagged as intrinsically fluorescent and removed from screening, while those below the threshold advanced to Phase 4.
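Operationally, this Phase 1–3 logic reduces to comparing each well against two control-derived cutoffs, which Section 3.3 defines as μDMSO + 3σDMSO (hit threshold) and μribocil − 4σribocil (possible intrinsic fluorescence). The sketch below applies that triage; the function name and the well values are hypothetical.

```python
# A minimal triage sketch using the thresholds given in the methods:
# a well is a hit if its signal exceeds mean(DMSO) + 3*SD(DMSO); hits
# that also exceed mean(ribocil) - 4*SD(ribocil) are routed to the
# Phase 3 intrinsic-fluorescence counter screen. Placeholder data only.
import numpy as np

def triage(signal, dmso_wells, ribocil_wells):
    hit_cut = np.mean(dmso_wells) + 3.0 * np.std(dmso_wells, ddof=1)
    intrinsic_cut = np.mean(ribocil_wells) - 4.0 * np.std(ribocil_wells, ddof=1)
    if signal <= hit_cut:
        return "not a hit"
    return "Phase 3 counter screen" if signal > intrinsic_cut else "advance to Phase 4"

dmso = [3100, 3040, 2980, 3120]
ribocil = [9800, 10150, 9920, 10060]
for s in (3200, 6500, 11000):
    print(s, "->", triage(s, dmso, ribocil))
```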
Using this counter screen, we removed molecules so intrinsically fluorescent that binding was undetectable using FLED while retaining intrinsically fluorescent molecules that could bind to the FRS. Compounds that fit the criteria for Phases 1–3 were considered confirmed hits and moved into Phase 4, concentration-response testing.
2.4. Confirmed Hit Concentration–Response Testing
In Phase 4, compounds were plated in a two-fold decreasing concentration series, from 100 μM to 0.78 μM. Each compound was tested in quadruplicate and analyzed using the default parameters in MScreen. Similar methods have been used previously to describe the binding of molecules to the FMN riboswitch. Compounds with non-negative Hill slopes and R² values above 0.8 were further analyzed using Prism 10 to identify the approximate EC50, the half-maximal effective concentration required to reach the SM-specific maximum FMN release, along with improved Hill slope calculations and goodness of fit, R². We restricted our fully validated hits to compounds with an R² value greater than 0.95 and an EC50 value less than 30 μM. Using these parameters, we identified 22 compounds with concentration-response behavior using the four-phase FLED workflow. Each of these validated hits underwent further analysis using low-throughput methods such as isothermal calorimetry and transcription termination assays to assess efficacy beyond the FLED system. Across the 22 hits, 4 of the EC50 values were between 1 μM and 10 μM, 10 were between 10 μM and 20 μM, and 7 were between 20 μM and 30 μM.
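The concentration-response analysis amounts to fitting a variable-slope sigmoid with the baseline constrained to 0 (as described in Section 3.3) and retaining fits that meet the R² and EC50 cutoffs. A minimal sketch on normalized placeholder data follows; it is not the MScreen or Prism implementation.

```python
# A minimal variable-slope Hill fit with the baseline fixed at 0;
# concentrations and signals are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ec50, slope):
    return top / (1.0 + (ec50 / conc) ** slope)

conc = np.array([0.78, 1.56, 3.13, 6.25, 12.5, 25.0, 50.0, 100.0])  # uM
signal = np.array([0.02, 0.05, 0.11, 0.24, 0.46, 0.70, 0.88, 0.97])

popt, _ = curve_fit(hill, conc, signal, p0=[1.0, 15.0, 1.0], maxfev=10000)
resid = signal - hill(conc, *popt)
r2 = 1.0 - np.sum(resid**2) / np.sum((signal - signal.mean())**2)
print(f"EC50 = {popt[1]:.1f} uM, Hill slope = {popt[2]:.2f}, R^2 = {r2:.3f}")
# Apply the hit criteria used here: R^2 > 0.95 and EC50 < 30 uM.
print("validated hit" if (r2 > 0.95 and popt[1] < 30.0) else "rejected")
```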
3.1. RNA In Vitro Synthesis and Purification
In vitro transcribed FRS was prepared using T7 RNA polymerase. The T7 polymerase was purified in-house using established protocols. DNA template oligos were purchased from Integrated DNA Technologies (IDT) as single-stranded DNA.
The template consisted of the antisense sequence of the desired RNA, with the first two nucleotides replaced with 2′-O-methylated nucleotides to improve RNA 3′ end homogeneity. Before transcription, the DNA template was mixed with a DNA oligo coding for the T7 promoter sequence and allowed to sit at room temperature for at least two minutes. The DNA template and T7 promoter oligo final concentrations were 0.1 μM each. Transcription reactions were performed in 40 mM tris(hydroxymethyl)aminomethane (Tris) pH 8, 0.01% Triton-X, 30 mM magnesium chloride (MgCl2), 7.11 mM ATP, 7.71 mM CTP, 10.07 mM GTP, 7.11 mM UTP, 10 mM DTT, 2 mM spermidine, 0.5 U/mL inorganic pyrophosphatase (purchased from ThermoFisher Scientific), 3% dimethyl sulfoxide (DMSO), and 0.7 μM T7 RNA polymerase. Reactions were allowed to proceed for 3.75 to 4 hours at 37 °C while being shaken at 300 rpm and were then quenched by adding EDTA, pH 8.0, to a final concentration of 60 mM. Samples were flash-frozen and then stored at −80 °C overnight before purification by size exclusion chromatography (SEC). Transcription reactions were thawed, and the buffer was exchanged into the SEC buffer (5 M urea, 90 mM Tris base pH 7, 90 mM boric acid, 2 mM EDTA) using Amicon 30 kDa molecular weight cutoff filters. Samples were concentrated to approximately 0.5 mL, a volume appropriate for injection onto a fast protein liquid chromatography (FPLC) system. Before direct injection, samples were filtered through 0.22 μm SpinX filters, then injected onto an equilibrated Superdex 200 Increase 10/300 column (GE Healthcare, Chicago, IL, USA, now Cytiva). Samples were isocratically eluted at a flow rate of 0.3–0.4 mL/min at 4 °C and collected in 0.4 mL fractions. Before pooling and buffer exchange into storage buffer, samples within the major RNA product peak, around 13 mL, were analyzed on a 12% denaturing polyacrylamide gel to ensure the FRS RNA was of the correct molecular weight and free of nucleotide contamination. Samples of appropriate purity were pooled, and the buffer was exchanged into 50 mM Tris pH 6.5, 50 mM boric acid, and 150 mM potassium chloride (KCl) using a freshly equilibrated Amicon 30 kDa spin concentrator; the RNA was then stored at −80 °C.
3.2. Refolding and FMN Complex Formation
All procedures involving FMN were performed in low-light conditions to prevent photobleaching. All FMN solutions were prepared fresh from powder (Sigma Aldrich, St. Louis, MO, USA). Frozen FRS RNA was thawed gently on ice, heat denatured at 90 °C for two minutes, and then immediately diluted into a room temperature solution of 50 mM Tris pH 6.5, 50 mM boric acid, 150 mM KCl, 3.75 μM FMN, and 10 mM MgCl2 in a foil-wrapped tube, to an FRS concentration of 7.5 μM. The FRS was allowed to fold at 37 °C for 20 min, then further diluted in 50 mM Tris pH 6.5, 50 mM boric acid, and 150 mM KCl to a final concentration of 1.5 μM FRS, 0.75 μM FMN, and 2 mM MgCl2. This solution was incubated at 23 °C (room temperature) for 20 min before use in assays.
3.3. Fluorescent Ligand Equilibrium Displacement (FLED)
All procedures involving FMN were performed in low-light conditions to prevent photobleaching, and all FMN solutions were prepared fresh from powder (Sigma Aldrich). During FRS folding, compounds dissolved in DMSO were spotted on Corning 4514 low-volume black plates using an Echo 655 Acoustic Liquid Handler. Primary screening was performed with n = 1 using stock compounds of 2 mM for a final concentration of 10 μM and a DMSO concentration of 0.5%.
Folded FRS solution (10 μL) was added by hand using a 12-channel repeat-dispensing pipette. Plates were then shaken at 300 rpm on a Thermo Multidrop Combi for 2–3 s, spun down at 201 × g for one minute using a swing-bucket centrifuge, and then incubated at room temperature for, on average, 37 min (between 30 and 43 min) per plate. Plates were scanned using a BMG PHERAstar plate reader. The gain and volume on each plate were adjusted to a positive control well, and wells were scanned at an excitation of 485 nm and an emission of 520 nm, with 30 flashes per well. Compounds with a fluorescence increase above the hit threshold (μDMSO + 3σDMSO) were counted as hits and subjected to secondary testing. Secondary testing was completed in the same manner as above but with compounds plated in triplicate. Compounds that showed an average fluorescent signal increase above the hit threshold (μDMSO + 3σDMSO) and below the intrinsic fluorescence threshold (μribocil − 4σribocil) were then used for dose-response testing. Compounds with an average fluorescent signal above the intrinsic fluorescence threshold (μribocil − 4σribocil) were subjected to counter-screening to determine whether intrinsic fluorescence was high enough to invalidate the previous testing. Compounds were spotted onto plates, 50 nL each, as before, but in quadruplicate. Instead of adding 10 μL of FRS solution, buffer was added to each compound well. Any compounds with an average signal above the fluorescence threshold (μribocil − μDMSO) were removed from further testing. For this assay, compound fluorescence was normalized to the background fluorescence of the sample buffer containing 0.5% DMSO. Dose-response testing was completed in the same manner as above, but the selected compounds and DMSO were plated to concentrations ranging from 0.781 μM to 100 μM on an eight-point curve. Each compound was tested at each measured concentration in triplicate. SMs with initial Hill slopes above 0 and R² values above 0.8, as reported by the MScreen software, were further analyzed using Prism 10 software. Before analysis, the concentrations were converted to log concentration, and all data sets were normalized so that the maximum replicate signal was equal to 1 and the lowest replicate signal was equal to 0. Each concentration was averaged, and the standard deviation was calculated. The resulting information for each SM was analyzed using the nonlinear sigmoidal dose-response (variable slope) fitting analysis. The only parameter changed was constraining the baseline value to 0. Representative raw and normalized data from each phase of screening are provided for two different compounds in .
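The normalization step described above rescales each compound's dose series so that the highest replicate is 1 and the lowest is 0 before averaging replicates per concentration. A minimal sketch with placeholder wells:

```python
# Per-compound normalization of a dose series with triplicate wells
# (shortened here to four concentrations); values are placeholders.
import numpy as np

raw = np.array([[3200.0, 3350.0, 3300.0],   # rows: concentrations (low to high)
                [4100.0, 3900.0, 4050.0],   # cols: replicate wells
                [6200.0, 6100.0, 6350.0],
                [8800.0, 9000.0, 8700.0]])

norm = (raw - raw.min()) / (raw.max() - raw.min())  # lowest -> 0, highest -> 1
means, sds = norm.mean(axis=1), norm.std(axis=1, ddof=1)
log_conc = np.log10([1.56, 6.25, 25.0, 100.0])      # uM, illustrative
for lc, m, s in zip(log_conc, means, sds):
    print(f"log10[C] = {lc:5.2f}   mean = {m:.2f}   sd = {s:.2f}")
```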
After screening approximately 15k compounds, the hit rate dropped from 2.6% in Phase 1 to 0.15% after Phase 4, using stringent final screening criteria. FLED identified 22 structurally distinct small molecules with low micromolar activity, providing evidence that this system can isolate compounds that span a large chemical space, which may increase the probability of identifying compounds that are less prone to antibiotic resistance. The methodology described above has proven robust enough to identify compounds that are able to bind to the FRS despite the presence of the native ligand, FMN. Each step of FLED can be completed in under an hour, and the process from refolding to the last plate scanned can be completed for 8 plates in approximately 90 minutes using a lower-throughput sample application method. The fundamental principle of the method is grounded in the inherent properties of the interactions between FMN and the riboswitch and, therefore, could be used with other FRS sequences, beyond the F. nucleatum construct used here, with minimal optimization.
The setup described above can be adjusted to higher (e.g., 1536-well) or lower (e.g., 96-well) density plate formats, or to accommodate fully automated screening systems, further increasing the quantity of compounds screened per day. Moreover, the range of acceptable incubation times allows this technique to be used with or without rapid scanning technology, such as plate readers with attached stackers, and accommodates delays due to technological issues. The major parameters that may need to be adjusted are the volume of FMN:FRS complex (if using assay plates that are not optimized for low volume) and the incubation time of the plates after adding the complex. This method could also be used to screen libraries of molecular fragments and reveal fragments capable of interacting with the riboswitch-binding pocket; however, it would be unable to identify hits with very low affinity or ones that interact with cryptic binding sites. The simplicity of this method, requiring at a minimum a fluorescent plate reader, makes it practical for both larger, highly resourced laboratories and smaller research groups. This scalability is particularly useful since the majority of antibiotic research carried out today takes place at smaller start-up biotech companies and academic research institutions. FLED could also be used by labs without any automation to test natural products or newly synthesized compounds for activity on the FRS. Another advantage of this method is that, if mutations arise in the FRS sequence, the new mutant riboswitches can replace the original RNA used for screening; as long as the riboswitch maintains a high-affinity binding interaction with FMN, it can be used for FLED screening. Each scaffold validated in the method can be successively improved through systematic medicinal chemistry. While identification of FRS-binding small molecules could lead to novel antibiotics individually, the number and variety of compounds identified could also readily support combination therapies, which spread the evolutionary pressure to develop drug resistance across multiple separate molecules. For example, it has been shown that administering multiple drugs can re-sensitize bacteria to drugs and reduce the development of resistance overall. The principle of FMN unquenching can even be applied more broadly, in systems outside of FRS, to target FMN-binding proteins in bacteria or humans, and may identify compounds with therapeutic effects in diseases other than bacterial infections. FLED is a robust method with great potential for use across many research institutions and applications in multiple biologically relevant systems, beginning with bacterial riboswitches and new antimicrobial discovery.
An autopsy case of an adult woman with Rapid-Onset Obesity with Hypoventilation, Hypothalamic, Autonomic Dysregulation, and Neuroendocrine Tumors (ROHHAD(NET)) syndrome developing nonalcoholic steatohepatitis and hepatocellular carcinoma: A case report

Rapid-onset obesity with hypothalamic dysfunction, hypoventilation, and autonomic dysregulation (ROHHAD) is a rare syndrome characterized by hyperphagia and rapid-onset weight gain starting in early childhood; only about 100 cases have been reported. This rapid and remarkable weight gain is followed by hypothalamic manifestations with neuroendocrine deficiencies, hypoventilatory breathing abnormalities, and autonomic dysregulation. The etiology of this syndrome has not been clarified, although genetic, epigenetic, and immunological theories have been suggested. Neural crest tumors complicate 4% to 50% of ROHHAD syndrome cases, and hence the acronym of the disease was amended to ROHHAD(NET) syndrome in 2008. Natural history information about this rare syndrome is sparse because reported mortality is high, at 50% to 60% within a short period after diagnosis, usually due to hypoventilation, cardiopulmonary failure, or both. Previously, only one case developing nonalcoholic steatohepatitis (NASH) and hepatocellular carcinoma (HCC), the oldest reported ROHHAD(NET) case, had been described. Nonalcoholic fatty liver disease (NAFLD) has grown from a relatively unknown disease to the most common cause of chronic liver disease worldwide; in fact, 25% of the world's population is currently thought to have NAFLD. NASH is a subtype of NAFLD that can progress to cirrhosis and HCC, and NASH is already considered among the top etiologies for HCC. Herein, we report the autopsy case of the second-oldest (21-year-old) patient with ROHHAD(NET) syndrome, who had liver cirrhosis due to NASH and HCC and died from acute-on-chronic liver failure caused by incidental acute pancreatitis.

A 17-year-old girl was referred to us by pediatric specialists at our institution because of liver dysfunction. Her neonatal and infancy periods were unremarkable, and she had no obese family members. At 1 year of age, she developed a biopsy-proven angiolipoma of 40 mm in a buttock, and a 20 mm left adrenal nodule was detected incidentally (Fig. ). She had rapid weight gain and was diagnosed with growth disturbance at 3 years old. Hormonal testing indicated hypothalamic and pituitary disturbances, particularly of growth hormone (GH). At 6 years old, she was diagnosed with sleep apnea syndrome, and biphasic positive airway pressure (BiPAP) was initiated from 7 years old. Additionally, at 9 years old, she was diagnosed with so-called adipsic hypernatremia. Her marked weight gain had not improved despite dietary education programs and frequent admissions for diet therapy. Based on this clinical course and these findings, she was diagnosed with ROHHAD(NET) syndrome. At 12 years old, mild brain inflammation was suspected on single photon emission computed tomography, and immunosuppressive therapy with cyclosporine was administered for a year. Mild elevations (30–50 U/L) in alanine aminotransferase (ALT) and aspartate aminotransferase were present from 4 years old, and moderate elevations (50–150 U/L) continued from 7 to 11 years old. At the consultation, her body height and weight were 134.1 cm and 107 kg (body mass index: 59), respectively.
Hepatic cirrhosis was suspected based on thrombocytopenia, coagulopathy, hypoalbuminemia, and hyperammonemia (Table ), and morphological changes and dilated collateral veins were detected by computed tomography (CT). Liver parenchyma retrospectively showed isodensity at 4 years old (Fig. A, a) and marked hypodensity at 6 and 8 years old (Fig. A, b and c). The liver appeared atrophic and cirrhotic changes were already evident at 15 years old (Fig. A, d), and the change was more remarkable at this consultation. Liver density at 15 and 17 years old (Fig. A, d and e) was almost the same as that of the spleen, and "burned-out NASH" was suspected. Serologically (Table ), hepatitis C antibody, hepatitis B surface antigen, and antinuclear antibody were negative. Ultrasound (US)-guided liver biopsy was performed, and liver cirrhosis with moderate inflammation and mild macrovesicular steatosis was observed histologically (Fig. B). From these findings, the patient was diagnosed with liver cirrhosis due to NASH and classified as Child–Pugh class B (score 8). Although GH supplementation was performed for a year, its efficacy was unclear. At 19 years old, a small liver nodule was detected by enhanced CT and magnetic resonance imaging (MRI) (Fig. A) on the liver surface of the left lobe. The nodule grew gradually and was diagnosed as HCC. Percutaneous radiofrequency ablation (RFA) was difficult because the nodule was located at the liver surface and US could not adequately visualize it. Surgical resection, laparoscopic RFA, and liver transplantation were considered contraindicated because general anesthesia was difficult due to her short neck, comorbid asthma, and severe obesity. On angiography (Fig. B), the tumor was small and the feeding arteries were unclear, so transarterial chemoembolization (TACE) could only be performed incompletely. Although she was transferred to another hospital with a transplantation center, transplantation was not indicated, and thus irradiation therapy was performed. As a result, the tumor shrank in size, and deterioration of hepatic function was not observed. She was discharged and lived her daily life fairly well for a year. At age 21, severe acute pancreatitis, marked electrolyte disorder, and hypovolemic shock suddenly developed, and she was admitted to an intensive care unit. Although the acute pancreatitis improved, she died from hepatic failure and pulmonary hemorrhage complicated by pulmonary edema after 20 days. Autopsy findings showed severe liver cirrhosis, while steatosis in the liver was not prominent (Fig. A and B). The cancer cells of the treated HCC were mostly necrotized and replaced by regenerated liver tissue (Fig. C and D). A 15 mm ganglioneuroma, which characterizes this syndrome, was found in the left adrenal gland (Fig. A and B); it had not been confirmed histologically antemortem. No obvious inflammation of the pituitary gland or hypothalamus was evident (Fig. C and D). The pancreatitis was not prominent and appeared to have already resolved. Marked pulmonary hemorrhage and edema and hepatic atrophy were also observed. In summary, although she survived to adulthood despite this syndrome with its poor prognosis, she developed NASH cirrhosis, which seems to be another complication observed in long-surviving cases, and died from acute-on-chronic liver failure caused by incidental acute pancreatitis. ROHHAD(NET) syndrome is a very rare disorder associated with a high risk of mortality.
In approximately 40% of ROHHAD patients, ganglioneuroma or ganglioneuroblastoma is observed, and thus the acronym of the disease was amended to ROHHAD(NET) syndrome in 2008. The etiology of the disease remains unclear, and no definite genetic correlate has been identified; an autoimmune process or epigenetic disorders are currently considered possible etiological hypotheses. Only 10 autopsy cases of this syndrome have been reported, and ours is believed to be the second, and hence extremely valuable, adult autopsy case. Each symptom and pathophysiological change of this syndrome appears to contribute closely to the development of NASH and HCC. Sleep apnea syndrome is considered an independent risk factor for NAFLD because it contributes to the progression of NAFLD via oxidative stress, lipid peroxidation, inflammation, and insulin resistance. BiPAP treatment significantly reduces aspartate aminotransferase and ALT levels in obese patients, delays the progression of NAFLD, and improves metabolic and cardiovascular function. In this case, although the liver had developed decompensated cirrhosis by 17 years of age, the early initiation of BiPAP might have contributed to her long-term survival, preventing early death and delaying the progression of hepatic fibrosis. GH deficiency, which was also noted in this case, is associated with NAFLD/NASH, and the clinical application of GH and insulin-like growth factor 1 for obesity and liver cirrhosis has been trialed in several pilot clinical studies. Additionally, a randomized study demonstrated that GH administration significantly improved the prognosis of patients with chronic liver failure. In our case, although GH supplementation was attempted for 1 year when the patient was 17 years old, its efficacy was unclear. Because over-replacement of GH may conceivably increase cancer risk, we had to discontinue the therapy for fear of promoting the growth of the buttock and adrenal tumors. Thus, the adequate scheduling and dosing of this therapy remain challenging issues. The hypothalamus has critical roles in maintaining metabolic homeostasis, and leptin signaling in the hypothalamus regulates hunger and energy expenditure. In NASH and NAFLD patients, circulating leptin levels are higher than in control subjects and are consistent with disease severity. In the current case, leptin levels at 10 years of age were high and might be related to the severity of NAFLD/NASH. Relatedly, hypothalamic inflammation has also been shown to be involved in the regulation of hepatic steatosis. In this case, inflammation of the hypothalamus and pituitary body had been suspected from single photon emission computed tomography data at 12 years old, and immunosuppressive therapy was administered for a year. No active inflammation or gliosis was evident among the histological findings at autopsy, whereas some reported autopsy cases exhibited hypothalamic inflammation histologically. The reason is unclear, although a therapeutic effect or natural improvement over the long survival period is suspected. According to a systematic review, the median age of death for ROHHAD(NET) syndrome is estimated at 4.6 years (3–6 years). The other reported case with NASH and HCC was the longest-living case, at 27 years old; in that case, HCC had developed at age 26. The present case is believed to represent the second-longest survival period reported to date. This extended survival must be considered in relation to the development of NASH and HCC.
In the other reported ROHHAD(NET) case with HCC, RFA was performed successfully and the tumor was well controlled. In our case, TACE and radiation were performed as therapy for the HCC. According to the Japanese guidelines for HCC, RFA or resection would have been recommended for this patient; however, the HCC was located at the liver surface, and percutaneous RFA seemed to be contraindicated. Additionally, general anesthesia was intolerable because of severe obesity, asthma, and the patient's short neck; hence, surgical resection, laparoscopic RFA, and transplantation were also contraindicated. As a result of TACE and radiation treatment, the tumor shrank and was controllable for a year, and it was almost completely necrotized at autopsy. As described above, in most ROHHAD(NET) syndrome cases, therapeutic modalities requiring general anesthesia are contraindicated; therapeutic options are therefore limited, which makes screening for HCC all the more important. Jalal Eldin et al recommended screening for hepatic lesions using abdominal US in patients presenting with the clinical features of NAFLD as they grow older, particularly if they have signs of advanced hepatic fibrosis or cirrhosis. In our case, the HCC was detected by CT, and US screening had not been performed; indeed, the tumor detected by CT was hardly visualized by US because of thick subcutaneous and visceral fat. To detect small HCCs in obese patients, CT or MRI is generally superior to US, and for HCC screening in ROHHAD(NET) syndrome, US combined with CT and/or MRI might be adequate. The current patient died of multiorgan failure caused by severe acute pancreatitis and subsequent acute-on-chronic liver failure. There are no previously reported cases of ROHHAD(NET) developing pancreatitis, and she had no hypertriglyceridemia or other risk factors for acute pancreatitis. A relationship between the pancreatitis and this syndrome could not be established from the autopsy findings. In conclusion, we described the extremely rare autopsy case of a 21-year-old patient with ROHHAD(NET) syndrome complicated by NASH and HCC. This case is valuable not only for the management of other ROHHAD(NET) syndrome cases but also for improving our understanding of the natural history of NAFLD, NASH, and HCC. This is a single case report; further accumulation of cases and investigation with larger sample sizes are needed. The authors thank H. Nikki March, PhD, from Edanz ( https://jp.edanz.com/ac ) for editing a draft of this manuscript. Conceptualization: Satoru Hasuike, Yoshinori Ozono, Yuri Komaki, Kenichi Nakamura, Mitsue Sueta, Misayo Matsuyama, Hirotake Sawada, Hiroshi Kawakami. Writing—original draft: Satoru Hasuike. Data curation: Keisuke Uchida, Souichiro Ogawa, Hotaka Tamura, Naomi Uchiyama, Toyoki Nishimura, Toshiyuki Oguri, Hiroshi Kawakami. Investigation: Keisuke Uchida, Souichiro Ogawa, Hotaka Tamura, Hiroshi Hatada, Yuri Komaki, Kenichi Nakamura, Hiroshi Kawakami. Resources: Naomi Uchiyama. Visualization: Hiroshi Hatada, Toshiyuki Oguri, Yuichiro Sato, Hiroshi Kawakami. Supervision: Hisayoshi Iwakiri, Mitsue Sueta, Kenji Nagata, Misayo Matsuyama, Hirotake Sawada, Toshiyuki Oguri, Yuichiro Sato, Hiroshi Kawakami. Writing—review & editing: Misayo Matsuyama, Hirotake Sawada, Toshiyuki Oguri, Hiroshi Kawakami.
Ascertaining the Effects of Tissue Sealers on Minor Laparoscopic Procedures between Obstetrics and Gynecology Residents: A Prospective Cohort Study

In recent times, gynecological surgery and training have experienced many radical changes, including a reduction in operative cases due to the COVID-19 pandemic. For residents in gynecological surgery, this means fewer cases being assigned in the operating room. Basic surgical skills and knowledge can be practiced in simulation laboratories before operating on patients. However, current resident training focuses primarily on anatomy and surgical technique, while the intraoperative application of the different instruments and techniques for coagulation and hemostasis receives less consideration; addressing this gap would increase patient safety. In fact, coagulation, vessel sealing, and control of hemostasis are crucial steps in every surgical procedure. To improve safety and precision, several devices for laparoscopic use have been developed over the last two decades. These instruments belong to two major categories: bipolar forceps and tissue sealers. Bipolar forceps are considered the basic electrosurgical instruments, not only due to their increased safety profile relative to monopolar electrosurgery, but also due to their feasibility in terms of costs and benefits, since they are reusable and available worldwide. On the other hand, tissue and vessel sealers are newer, safe instruments featuring real-time identification of the tissue to be cut; for this reason, sealers allow uniform compression of tissues and vessels, which is essential for satisfactory surgical outcomes. With regard to safety, bipolar forceps and tissue sealers have mainly been reported to be comparable, although tissue sealers allow simultaneous cutting and coagulation without the need to switch instruments during the procedure. Tissue/vessel sealers combine the pressure applied by the handpiece with an energy source (traditionally radiofrequency or ultrasound) applied to the target tissue for tissue sealing. Several studies have addressed the impact of these technologies on different kinds of general and specialized surgeries, showing similar data regarding efficacy and safety. However, there is scant evidence regarding the possible benefits or disadvantages of these categories of devices in gynecologic laparoscopy, although minimally invasive laparoscopy represents the gold-standard approach in over 70% of procedures for benign uterine and adnexal pathologies. Moreover, the current literature analyzes the advantages and disadvantages of the several hemostatic devices solely for expert surgeons, without considering that the use of tissue sealers or bipolar forceps by resident surgeons might achieve different results. In fact, minor surgeries, including laparoscopic adnexectomy and salpingectomy, are the most common interventions carried out by residents as first surgeons. Based on this, the aim of this study was to evaluate whether the choice of hemostatic surgical device impacts the learning curve and surgical outcomes of gynecology residents performing minor laparoscopic procedures. This study was designed as a prospective cohort study conducted at two centers related to the University of Campania "Luigi Vanvitelli" (the Obstetrics and Gynecology Unit of the AOU L.
Vanvitelli, Naples, Italy, and the Obstetrics and Gynecology Unit, AORN Sant'Anna e San Sebastiano, Caserta, Italy) between March 2019 and March 2021. The Institutional Review Board (IRB) of the University of Campania "Luigi Vanvitelli" approved the study (protocol no. 712/15-11-19, dated 15 November 2019). All patients signed a written informed consent form describing the surgical approach and common complications, as well as privacy and anonymity protection throughout data collection and analysis. Senior gynecology residents who served as first surgeons during the entire laparoscopic salpingo-oophorectomy or salpingectomy were included; to maintain consistency, junior residents, fellows, and consultants were excluded from the analysis. Patients referred from the outpatient gynecological clinics of our unit who, according to national and international treatment guidelines, were eligible for a planned laparoscopic salpingectomy or salpingo-oophorectomy for benign gynecological pathologies were enrolled. The definition of a benign gynecological pathology included an ultrasonographic suspicion of a unilateral tubal, ovarian, or tubo-ovarian pathology, sized between 3 and 7 cm in its largest diameter, without local extension. Benignity was confirmed through postsurgical histopathological examination. Women who were not suitable for or declined a laparoscopic approach, declined the procedure, did not sign a written informed consent form, suffered from a gynecologic malignancy or severe systemic illness (i.e., autoimmune or endocrine diseases, severe coagulopathy or cardiac pathology), or whose procedures were converted to laparotomy were excluded from the analysis. Consecutively enrolled patients were divided into two main categories depending on which device was used for the entire procedure:
- Group A: a 5 mm diameter, 35 cm length, curved-branch radiofrequency tissue sealer connected to a dedicated electric generator (Enseal NSGL2, Ethicon Endo-Surgery, Germany, or LigaSure, Medtronic, USA).
- Group B: 5 mm diameter, 35 cm classic rotating bipolar forceps (RoBi Kelly, Karl Storz Endoskope, Germany).
Patients from each category were then subdivided according to the type of surgical procedure: groups A1 and B1 for laparoscopic salpingo-oophorectomy, and groups A2 and B2 for laparoscopic salpingectomy. Operative procedures were carried out according to the most recent national guidelines and the good clinical practice of our operative unit; no changes from a routine salpingectomy or salpingo-oophorectomy were made. The same set of senior residents acted as operators for both groups A and B. 2.1. Primary and Secondary Outcomes The primary outcomes of interest concerned the subjective evaluation of the surgical procedure by the resident serving as first operator, by means of a Numeric Rating Scale (NRS): (i). Vision of the surgical field (0 for "inadequate vision" and 10 for "optimal vision") (ii). Perceived difficulty of the intervention (0 for "extremely easy" and 10 for "extremely difficult") (iii). Overall procedural satisfaction (0 for "complicated or incomplete procedure" and 10 for "uncomplicated and satisfying intervention") Several secondary outcomes concerning intraoperative and postoperative characteristics were investigated: (i).
Procedure time, defined as the vascular time interval, expressed in minutes, covering devascularization and removal of the targeted organ. For laparoscopic salpingectomy, the recorded interval included the resection of the isthmic part of the tube and the tubal arterial arch; for salpingo-oophorectomy, it additionally included the resection of the infundibulopelvic and round ligaments. (ii). Postoperative hospitalization days, defined as the number of days between procedure and discharge. (iii). Hemoglobin (Hb) percentage variation, expressed as the ratio between presurgical and first postsurgical Hb values. (iv). Intraoperative and postoperative complications (i.e., blood transfusion necessity, laparotomic conversion, infection, or other common complications). (v). 24 h postoperative pain, expressed using a 0–10 NRS (0 = "no pain", 10 = "extremely painful"). Patients from both groups underwent the same postoperative analgesic treatment according to our institution's protocol. 2.2. Sample Size According to the current literature, the a priori sample size required to detect a significant difference between the groups, given 80% power and an alpha level of 0.05, was 40 procedures per arm of the study, including a 7% anticipated opt-out rate. 2.3. Statistical Analysis Statistical analysis was conducted using Stata 14.1 (StataCorp, College Station, TX, USA). Normality of the data was assessed using the Shapiro–Wilk test. Continuous variables were reported as means and standard deviations (SDs), and dichotomous data as absolute numbers and percentages. For continuous variables, differences were evaluated by means of the t-test, while differences in proportions between the groups were analyzed using Fisher's exact and chi-squared tests, where appropriate. The association between the instrument used and the outcomes of interest was assessed using risk ratios (RRs) with 95% confidence intervals (CIs). Statistical significance was set at p < 0.05.
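To illustrate the association analysis described in Section 2.3, the sketch below computes a risk ratio and its 95% CI from a 2 × 2 table using the standard log-RR normal approximation. This is an illustrative Python version with made-up counts; the study itself performed these calculations in Stata 14.1.

```python
import math

def risk_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Risk ratio and 95% CI for a 2x2 table.

    a = events among the exposed, b = non-events among the exposed,
    c = events among the unexposed, d = non-events among the unexposed.
    """
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    # Standard error of log(RR), normal approximation.
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical example: 6/21 "long" procedures with a tissue sealer
# versus 12/19 with bipolar forceps.
rr, lower, upper = risk_ratio_ci(6, 15, 12, 7)
print(f"RR = {rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```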
A total of 80 minor surgical procedures matched the inclusion criteria and were evaluated. Of those, 40 were laparoscopic salpingo-oophorectomies: 21 were carried out with a tissue sealer (group A1) and 19 with bipolar forceps (group B1). Forty salpingectomies were undertaken: 20 with a tissue sealer (group A2) and 20 with bipolar forceps (group B2). There were no differences concerning the largest diameter of the lesion between groups A and B (4.8 ± 2.1 cm vs. 5.4 ± 1.7 cm; p = 0.66), and no mass extended outside the ovary or the tube. 3.1. Visualization and Satisfaction Concerning the salpingo-oophorectomies, the residents' ratings indicated enhanced visibility of the surgical field with the use of a tissue sealer rather than bipolar forceps (8.4 ± 0.8 vs. 7.3 ± 0.9; p = 0.03). The intervention was judged to be easier with the radiofrequency device than with bipolar forceps (5.4 ± 1.2 vs. 7.0 ± 1.4; p = 0.02). Moreover, higher overall satisfaction was reported by residents in the tissue sealer group than in the bipolar forceps group (9.2 ± 0.4 vs. 7.6 ± 1.0; p = 0.02). Consequently, the mean procedure time was reduced with the use of a tissue sealer (7.8 ± 3.4 min vs. 12.6 ± 3.1 min; p = 0.01).
Compared to standard bipolar forceps, the use of a tissue sealer in salpingo-oophorectomies showed a reduced RR for a procedure time over 10 min (RR 0.36; 95% CI 0.17 to 0.73; p = 0.01), a reduced RR for visibility of the surgical field scored lower than six points (RR 0.45; 95% CI 0.18 to 0.87; p = 0.03), a reduced risk of a difficulty score over six points (RR 0.40; 95% CI 0.21 to 0.67; p = 0.02), and a reduced RR for overall satisfaction under six points (RR 0.33; 95% CI 0.19 to 0.74; p = 0.02). With regard to the salpingectomies, a statistically significant reduction in procedure duration was notable when using a tissue sealer rather than bipolar forceps (7.2 ± 3.4 min vs. 13.8 ± 2.2 min; p = 0.02).
When a tissue sealer was used, enhanced visibility of the surgical field was achieved (8.1 ± 1.1 vs. 6.7 ± 1.4; p = 0.01), together with a reduction in procedural difficulty (6.5 ± 1.1 vs. 7.5 ± 0.9; p = 0.04), and better overall operator satisfaction was observed (9.3 ± 0.5 vs. 7.5 ± 0.6; p = 0.01). Relative to standard bipolar forceps, the use of a tissue sealer in laparoscopic salpingectomies showed a reduced RR for an overall procedure time longer than 10 min (RR 0.56; 95% CI 0.22 to 0.81; p = 0.02), a reduced RR for vision of the surgical field with a score under six points (RR 0.21; 95% CI 0.14 to 0.49; p = 0.01), a reduced risk of a difficulty score over six points (RR 0.84; 95% CI 0.76 to 0.91; p = 0.04), and a reduced RR for an overall satisfaction score of less than six points (RR 0.30; 95% CI 0.15 to 0.55; p = 0.01). 3.2. Surgical Outcomes Comparing tissue-sealer-based to bipolar-forceps-based salpingo-oophorectomy, a significant reduction in intraoperative blood loss was found (12.2 ± 4.7 mL vs. 33.2 ± 9.7 mL; p = 0.01). Postoperative pain was lower with tissue sealer usage (4.5 ± 1.1 vs. 5.7 ± 1.8; p = 0.03). No other differences were observed in the intraoperative data, and in each group there were no surgical complications or re-interventions. Compared with standard bipolar-based salpingo-oophorectomy, using a tissue sealer was related to a reduced risk of intraoperative blood loss over 20 mL (RR 0.54; 95% CI 0.21 to 0.76; p = 0.01) and a reduced risk of postoperative pain over six points after 24 h (RR 0.67; 95% CI 0.51 to 0.89; p = 0.02). There were no differences regarding a percentage of Hb loss after 24 h over 7% (RR 0.90; 95% CI 0.71 to 1.15; p = 0.24) or the number of post-surgical complications (RR 1.15; 95% CI 0.15 to 4.55; p = 0.89). The evaluation of the laparoscopic salpingectomies showed that the use of a tissue sealer was associated with a reduced percentage of Hb loss 24 h after the intervention (bipolar forceps 8.1 ± 4.2% vs. tissue sealer 4.5 ± 1.1%; p = 0.02). Moreover, postoperative pain at 24 h was reported to be higher in procedures carried out with bipolar forceps (5.1 ± 0.9 vs. 4.1 ± 0.8; p = 0.03). The other investigated outcomes were comparable between the two groups. Compared to the standard approach using bipolar forceps, tissue-sealer-based salpingectomies performed by residents showed a reduced risk of the percentage of Hb loss after 24 h being over 7% (RR 0.78; 95% CI 0.65 to 0.90; p = 0.02) and a reduced RR for postoperative pain over six points after 24 h (RR 0.69; 95% CI 0.58 to 0.84; p = 0.02). No differences were reported concerning intraoperative blood loss over 20 mL (RR 0.54; 95% CI 0.21 to 0.76; p = 0.01) or intraoperative complications (RR 0.92; 95% CI 0.21 to 3.77; p = 0.94).
This study of consecutive minor gynecologic laparoscopic procedures carried out by senior residents suggests that the use of a radiofrequency tissue sealer instead of bipolar forceps was associated with increased satisfaction, better visibility, and a reduced procedure time. Moreover, intraoperative and postoperative blood losses were significantly reduced with the use of a tissue sealer. Minimally invasive surgery involves less surgical trauma, faster hospital discharge, and decreased postoperative pain, with less time needed to return to a normal life routine. It is essential to teach and train laparoscopy in every surgical residency, especially in gynecological surgery, in which a less invasive approach significantly improves the postsurgical quality of life of both fertile-age and postmenopausal women. Several lines of evidence show that a standardized, step-by-step learning program is necessary to achieve basic surgical skills. The early phase of the laparoscopy learning curve can be completed in a simulation laboratory, allowing teaching and practice in a safe, systematic, and harm-free environment. In the second part of the learning curve, however, training moves into the operating room; in this scenario, choosing the right instrumentation is a crucial step in improving the surgical skills of young surgeons. When a tissue sealer was used, the residents not only reported the predictable improvements in surgical outcomes and procedure time, but also higher overall satisfaction and easier visualization of the surgical field during the procedure itself. It should be noted that a comfortable environment enhances both the learning path and the intraoperative and postoperative outcomes. This study has several strengths. First, it provides insight into gynecology residents' perception of surgical field visibility and their satisfaction, which is important for a global, standardized surgical skills program that also involves resident surgeons in the didactic path, in order to maximize learning and smooth the laparoscopic learning curve. Moreover, the prospective analysis of the surgical procedures represents the best available option to reduce selection bias, since, owing to ethical constraints, it was not possible to randomize the instrumentation (tissue sealer or bipolar forceps) used during the procedures. Conversely, this study has several limitations. The first is the small number of procedures available for analysis; for this reason, a sub-analysis of the investigated outcomes by resident age, sex, and skill level was not performed. In addition, at our institution salpingo-oophorectomies and salpingectomies are performed exclusively by gynecologists, precluding comparisons between gynecological and general surgical or other surgical residencies. In conclusion, the use of tissue sealers rather than standard bipolar forceps during laparoscopic salpingectomies or salpingo-oophorectomies, by gynecology residents as first surgeons, was related to reduced difficulty as well as improved visibility and overall satisfaction. Moreover, there was significantly less post-surgical pain and blood loss in the tissue sealer group than in the bipolar forceps group.
Evaluation of Four Forensic Investigative Genetic Genealogy Analysis Approaches with Decreased Numbers of SNPs and Increased Genotyping Errors

Two individuals who can be traced back (within finitely many generations) to a common ancestor are related and are expected to share copies of the same ancestral DNA sequences, known as identical-by-descent (IBD) segments. In general, the more distant the relationship, the shorter the IBD segments they share; hence, a relationship can be inferred from the IBD segments detected. Based on this principle, forensic investigative genetic genealogy (FIGG), also known as long-range familial searching, has emerged in recent years. It is considered a subdiscipline involving forensic genetics, genealogy, and bioinformatics, and it differs greatly from the traditional kinship inference strategy in forensic genetics. Conventionally, a relationship is tested using dozens of short tandem repeats (STRs), which are obtained by capillary electrophoresis and then analyzed using likelihood ratio (LR) methods. Due to the low power of common forensic STRs, relationships can only be reliably identified up to the second degree (e.g., uncle–nephew). In contrast, FIGG is based on dense single-nucleotide polymorphisms (SNPs), typically over 600,000, obtained from whole-genome sequencing or high-density microarrays, and can identify relatives as distant as the seventh degree (e.g., third cousins). Exploratory approaches are frequently used in FIGG and can be subdivided into method-of-moments (MoM) and IBD segment-based methods. MoM estimates coefficients of pairwise relatedness, such as the kinship coefficient (θ) and the Cotterman coefficients, based on the observed identical-by-state (IBS) status of the genetic markers, while IBD segment-based methods infer relationships by identifying IBD segments shared between two individuals. Depending on whether alleles are assigned to paternal or maternal chromosomes, IBD segment-based methods can be further subdivided into two types: phased and phase-free. With phased genotyping data, if two fragments are identical at multiple continuous markers such that the probability of a random match is extremely low, the two fragments can be considered IBD segments, whereas with phase-free genotyping data, an IBD segment is assigned if two individuals share at least one half-genotype at multiple continuous markers and the segment length exceeds a certain threshold. MoM tools, such as PLINK and KING, are robust and computationally efficient, while IBD segment-based tools, such as IBIS and GERMLINE, are better at identifying distant relatives. However, when facing complicated crime scene samples, deciding which approach to use remains a troubling problem for investigators. Previous studies have shown that some approaches performed unsatisfactorily for samples of suboptimal quality and quantity, where null alleles and genotyping errors occur frequently, and relationships were consequently misclassified. Therefore, selecting appropriate approaches can improve not only the efficiency but also the robustness of kinship inference. It is necessary to answer three questions before selection: (i) How many SNPs are required for each approach to provide effective information for kinship inference? (ii) What is the upper limit of an approach in terms of genotyping errors?
More importantly, (iii) How can robustness be improved using these existing approaches? In this study, we selected four existing approaches and evaluated their performance with decreasing numbers of SNPs and increasing genotyping error rates, aiming to guide the selection of appropriate approaches in FIGG. The four selected approaches were KING, a common MoM estimator; IBIS, a popular phase-free IBD segment-based tool; TRUFFLE, a phase-free IBD segment-based tool that embeds a model to deal with genotyping errors; and GERMLINE, a typical phased IBD segment-based tool. We first evaluated the performance of these tools using simulated datasets with numbers of SNPs decreasing from 5 million to 5000 and genotyping error rates increasing from 0.1% to 10%. Then, we explored the possibility of making robust kinship inferences for samples with ultra-high genotyping errors by integrating the MoM and IBD segment-based methods. Finally, we tested the performance on real diluted and degraded DNA samples. 2.1. Simulation The haplotype data of 208 unrelated individuals were obtained from the 1000 Genomes Project (GRCh37), encompassing 103 Han Chinese individuals in Beijing (CHB) and 105 Southern Han Chinese (CHS) individuals. PLINK was then employed for SNP filtering: only bi-allelic SNPs with a minor allele frequency (MAF) over 0.05 were included, and SNPs on the X chromosome, Y chromosome, and mitochondrial DNA were excluded. After filtering, 5,265,508 SNPs were retained, hereafter referred to as 5265 K; the 5265 K panel was then used as the reference dataset. Finally, the pedigrees shown in were simulated by Ped-sim using the sex-average genetic map created by Bhérer et al. All simulations were performed with default parameters unless otherwise specified. Since 11 founders (the individuals in gray shown in ) are required for each simulation, a maximum of 18 families could be created from the 208 individuals. Therefore, we repeated the simulation 10 times while changing the seeds and finally obtained a total of 180 families, consisting of lineal relatives as well as full and half collateral relatives. From each simulated family, 30 pairs were extracted so that first- to seventh-degree relatives and unrelated pairs were included. In total, there were 540, 900, 1080, 720, 720, 540, and 360 pairs of first- to seventh-degree relatives, respectively, as well as 540 unrelated pairs. Throughout this study, inbred relationships were not considered.

Considering that the effective number of SNPs may change if different panels, different volumes of sequencing data, or samples of different quality and/or quantity are used, a series of subsets of the 5265 K panel was randomly selected, including 2633 K, 1316 K, 658 K, 329 K, 164 K, 82 K, 41 K, 20 K, 10 K, and 5 K. These subsets were used to determine a minimum panel, that is, the minimum number of SNPs that still carries sufficient information for kinship inference. Then, based on the minimum panel, five more datasets with genotyping error rates of 0.1%, 0.5%, 1%, 5%, and 10% were simulated using Ped-sim. 2.2. Mock Challenging Samples With informed consent, whole blood samples of six individuals from a Han Chinese family were collected. The relationships had been confirmed using STR genotype data in our previous study. These family members provided 4, 2, 1, 2, 3, 2, and 1 pairs of first- to seventh-degree relatives, respectively.
DNA was then extracted using the QIAamp® DNA Investigator Kit (Qiagen, Hilden, Germany) and quantified using the Qubit® 3.0 fluorometer (Thermo Fisher Scientific, Waltham, MA, USA) with the Qubit dsDNA HS Assay Kit (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's protocol. To mock low-copy DNA samples, each DNA sample was diluted with ATE buffer, resulting in four samples with total DNA amounts of 10 ng, 1 ng, 0.5 ng, and 0.1 ng. To mock degraded DNA samples, 100 ng of each DNA sample was fragmented using a Covaris M220 Focused-ultrasonicator™ following the manufacturer's recommendation; by applying different parameters, we obtained four degraded DNA samples with average fragment sizes of 1500 bp, 800 bp, 400 bp, and 150 bp, respectively. Finally, six intact, 24 diluted, and 24 degraded DNA samples were genotyped using the Infinium Asian Screening Array (ASA, Illumina, San Diego, CA, USA), a panel of ~650 K SNPs specifically selected for East Asian populations. Genotyping data were then refined using VCFtools; bi-allelic SNPs with an MAF above 0.05 and Hardy–Weinberg equilibrium p-values above 0.000001 were used as inclusion criteria. 2.3. Kinship Inference Four representative approaches for kinship inference were employed: one MoM estimator, KING, and three IBD segment-based software tools, IBIS, TRUFFLE, and GERMLINE. To facilitate the comparison, the kinship coefficient (θ) was used to determine the degree of relatedness; we expanded the empirical criteria described before to seventh-degree relationships and defined more distant relatives as unrelated pairs. Kinship coefficients were estimated with the four tools as follows: (1) KING estimates the proportion of IBD segments based on the IBS status of a large number of genetic markers. Since there is no need to specify allele frequencies, genotyping data are input directly, and KING directly outputs the kinship coefficient, hereafter referred to as θ_KING. (2) As a phase-free IBD segment detection tool, IBIS identifies IBD segments based on IBS status. The minimum length and number of markers required to define a segment as IBD were set to 2 cM and 186, respectively. In addition, genetic positions were interpolated before input based on the sex-average genetic map. Although IBIS was used for IBD detection, it also outputs kinship coefficients directly (using the -printCoef flag), hereinafter referred to as θ_IBIS. (3) Similar to IBIS, TRUFFLE also estimates IBD segments based on IBS status. There are two types of IBD segments, IBD1 and IBD2: an IBD1 segment is a haploid match between a pair of individuals in which only one pair of haplotypes is involved, while an IBD2 segment is a diploid match in which both haplotypes of a pair of individuals match. Two criteria are required to define a segment as IBD: (i) a probability of match at continuous markers too low to be caused by random similarity, and (ii) a segment length long enough to be unlikely to result from linkage disequilibrium. Here, the probability threshold was set to 10^−8 for both IBD1 and IBD2, and the length thresholds were set to 5 Mb for IBD1 and 2 Mb for IBD2. Importantly, a built-in error model is implemented in TRUFFLE, which can correct potential genotyping errors.
TRUFFLE outputs both IBD1 and IBD2 segments, and from these two types of segments, the three Cotterman coefficients (κ0, κ1, κ2) can be calculated as:

κ1 = L(IBD1) / L(genome), κ2 = L(IBD2) / L(genome), κ0 = 1 − κ1 − κ2,

where L(IBD1) and L(IBD2) are the lengths of IBD1 and IBD2 segments summed across all autosomes and L(genome) is the total genetic length (approximately 3346.30 cM, according to the genetic map adopted in this study). Then, the kinship coefficient was calculated as:

θ = κ2/2 + κ1/4,

hereafter referred to as θ_TRUFFLE. (4) Since the IBD detection of GERMLINE is based on haplotypes, genotype data have to be phased before input. In this study, phasing was performed using SHAPEIT v2 with the sex-average genetic map. The minimum length for identifying a segment as IBD was set to 3 cM, and GERMLINE ultimately outputs all IBD segment information. The Cotterman coefficients (κ0, κ1, κ2) and the kinship coefficient could then be calculated as described above, hereafter referred to as θ_GERMLINE. To evaluate and compare the performance of the different approaches, four indicators were calculated: overlapping rate, sensitivity (Sen), positive predictive value (PPV), and accuracy (AC). The overlapping rate is the proportion of pairs whose estimated kinship coefficients fall within the reference range intervals estimated from the reference panel. Sen was defined as the proportion of known relationships that were correctly inferred, and PPV was defined as the proportion of inferred relationships that were correctly assigned. AC was defined as the proportion of all known relationships that were correctly inferred. All of the following analyses, including figure generation, were performed in R software v.4.0.3.
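Because the conversion from summed IBD segment lengths to Cotterman and kinship coefficients is purely mechanical, a short sketch may be helpful. The function below implements the formulas above; it is an illustrative Python version (the published analyses were run in R), and the genome-length constant matches the 3346.30 cM quoted above.

```python
GENOME_CM = 3346.30  # total genetic length of the autosomes (cM)

def kinship_from_ibd(ibd1_cm: float, ibd2_cm: float, genome_cm: float = GENOME_CM):
    """Cotterman coefficients and kinship coefficient from summed
    IBD1/IBD2 segment lengths (in cM) across all autosomes."""
    k1 = ibd1_cm / genome_cm
    k2 = ibd2_cm / genome_cm
    k0 = 1.0 - k1 - k2
    theta = k2 / 2.0 + k1 / 4.0
    return k0, k1, k2, theta

# Example: full siblings share ~50% IBD1 and ~25% IBD2 on average,
# giving theta ~ 0.25.
k0, k1, k2, theta = kinship_from_ibd(0.5 * GENOME_CM, 0.25 * GENOME_CM)
print(f"k0={k0:.2f}, k1={k1:.2f}, k2={k2:.2f}, theta={theta:.3f}")
```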
3.1. Performance Using Panels with Different Numbers of SNPs The distribution of estimated θ values for the different panels and approaches is shown in . As expected, the kinship coefficients for all relationships fluctuated around the theoretical values, but the four approaches performed differently. Specifically, θ_KING showed little change compared with that of the 5265 K panel, even at the smallest number of SNPs (5 K), despite increased variation. In contrast, although θ_IBIS, θ_TRUFFLE, and θ_GERMLINE remained largely unchanged with more than 164 K SNPs, they showed larger variations and gradually shifted to the left with fewer SNPs. As a result, the kinship coefficients estimated by IBIS, TRUFFLE, and GERMLINE overlapped and became difficult to distinguish for different relationships. Notably, the value of θ_KING may be less than 0, whereas θ_IBIS, θ_TRUFFLE, and θ_GERMLINE have a lower limit of 0, since they are converted from the lengths of IBD segments.
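For readers who wish to reproduce the degree assignments, the snippet below shows one common way to bin an estimated θ into a relationship degree, using the power-of-two inference boundaries popularized by KING and extended out to the seventh degree, as in this study. The exact cutoffs used in the paper are not restated in this section, so treat these boundaries as illustrative assumptions.

```python
def degree_from_theta(theta, max_degree=7):
    """Bin an estimated kinship coefficient into a relationship degree.

    d-th degree if 2**-(d + 1.5) <= theta < 2**-(d + 0.5), e.g.
    [0.177, 0.354) -> 1st degree, [0.0884, 0.177) -> 2nd degree.
    Returns 0 for duplicates/monozygotic twins and None for pairs
    classified as unrelated (more distant than max_degree).
    """
    if theta >= 2 ** -1.5:
        return 0  # duplicate sample or monozygotic twin
    for d in range(1, max_degree + 1):
        if theta >= 2 ** -(d + 1.5):
            return d
    return None

assert degree_from_theta(0.25) == 1      # parent-offspring / full sibling
assert degree_from_theta(0.0625) == 3    # e.g., first cousins
assert degree_from_theta(0.001) is None  # treated as unrelated
```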
To quantitatively evaluate the performance of the four approaches with these virtual panels (ranging from 5265 K to 5 K), the overlapping rate, Sen, PPV, and AC were assessed, and the results are shown in . These four indicators showed a downward trend as the number of SNPs decreased but differed across approaches. KING had low Sen and PPV for fourth-degree or more distant relationships even when large numbers of SNPs were used, resulting in low overall accuracy. However, its four indicators all decreased slowly and gradually with decreasing SNPs, indicating that KING is robust to decreasing SNP numbers. In contrast, IBIS, TRUFFLE, and GERMLINE all showed a sharp turning point, apart from a few notable exceptions (explained below). Specifically, the turning points of both Sen and PPV were observed at 82 K for first- and second-degree relationships and at 164 K for third- to seventh-degree relationships, suggesting that 164 K may be a minimum number for effective kinship inference.

With respect to overlapping rates, when IBIS and GERMLINE were used, there were also turning points at 164 K for first- to fifth-degree relationships, but the overlapping rates remained unchanged for sixth- and seventh-degree relationships, as well as for unrelated pairs. The latter can be explained by the small θ values and large overlapping areas for these relationships: for these three types of relationships, the lower bounds were all 0, and they had similar upper bounds regardless of the number of SNPs, resulting in overlapping rates all close to 1. Similar results were observed for first- to seventh-degree relationships when TRUFFLE was employed. However, an unexpected sharp decrease and then an increase were observed at 164 K for unrelated pairs. Indeed, for each relationship, θ_TRUFFLE was slightly underestimated compared with the theoretical values when using the 5265 K panel, whereas with the 164 K panel, θ_TRUFFLE was distributed as expected. We speculate that when the number of SNPs is redundant, the underlying linkage disequilibrium may cause θ to be overestimated. This is why TRUFFLE performed best with the 164 K panel rather than with the 5265 K panel ( D). In fact, the authors of TRUFFLE appear to have anticipated this problem and recommend pruning markers to 100–500 K before input ( https://adimitromanolakis.github.io/truffle-website/index.html , accessed on 10 November 2023).

On the whole, when the number of SNPs was sufficient (more than 164 K), IBIS performed best, with an AC of 0.796–0.772, followed by GERMLINE (AC = 0.774–0.733), TRUFFLE (AC = 0.749–0.714), and KING (AC = 0.709–0.691). IBIS therefore shows superiority over the other three methods, especially for the inference of distant relationships. When the number of SNPs was approximately 82 K, all the approaches performed comparably, with TRUFFLE reaching the best AC (0.712). However, when the number of SNPs was even smaller, only KING (AC = 0.672–0.573) made relatively effective inferences. In summary, decreasing the number of SNPs has little effect on kinship inference for both the MoM and IBD segment-based methods when the number of SNPs is more than 164 K, at which point overlapping rates for all relationships are above 0.99 for each approach (except for unrelated pairs using TRUFFLE). The effect becomes non-negligible for IBD segment-based methods when the number of SNPs is below 164 K.
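For reference, the degree assignment applied throughout these comparisons can be sketched as follows. The power-of-two interval boundaries shown are the conventional empirical criteria that this study extended to seventh-degree relationships; this is an approximation, since the study's actual reference ranges were estimated from the reference panel:

    # Assign a degree of relatedness from a kinship coefficient estimate.
    # Degree d spans (2^-(d + 1.5), 2^-(d + 0.5)]; beyond the seventh degree,
    # pairs are treated as unrelated, as in this study.
    infer_degree <- function(theta, max_degree = 7) {
      if (theta > 2^-1.5) return("duplicate/monozygotic twin")
      for (d in seq_len(max_degree)) {
        if (theta > 2^-(d + 1.5)) return(sprintf("degree %d", d))
      }
      "unrelated"
    }
    infer_degree(0.24)    # "degree 1"
    infer_degree(0.002)   # "unrelated" (below the seventh-degree lower bound)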
Therefore, we considered the 164 K subset as the minimum panel for kinship inference and employed it for the genotyping error analyses.

3.2. Performance Using Panels at Different Levels of Genotyping Error

As shown in , the estimated kinship coefficients changed accordingly with increasing genotyping errors, and a high genotyping error rate caused a significant reduction in θ values. We found that θ_KING, θ_IBIS, θ_TRUFFLE, and θ_GERMLINE remained essentially unchanged at low error rates (0.1%), whereas the four tools performed differently as genotyping errors increased. Specifically, θ_KING values of first- to fourth-degree relationships tended to decrease slightly, while those of fifth-degree or more distant relationships (including unrelated pairs) tended to increase. Interestingly, reduced variation was observed when the error rate exceeded 1%. In contrast, θ_IBIS, θ_TRUFFLE, and θ_GERMLINE all tended to decrease when the error rates exceeded 0.1%, 1%, and 0.1%, respectively. Furthermore, the higher the genotyping error rate, the bigger the difference in kinship coefficients across the four approaches.

By leveraging this phenomenon, we attempted to correct the reduction in kinship coefficients caused by genotyping errors. First, we averaged the θ values estimated by the four approaches when the genotyping error rate was 0 and took this average (named the "expected kinship coefficient") as the dependent variable. Then, 13 predictor variables were constructed: θ_KING, θ_IBIS, θ_TRUFFLE, θ_GERMLINE, θ_Δ(KING, IBIS), θ_Δ(KING, TRUFFLE), θ_Δ(KING, GERMLINE), θ_Δ(IBIS, TRUFFLE), θ_Δ(IBIS, GERMLINE), θ_Δ(TRUFFLE, GERMLINE), θ_(KING/IBIS), θ_(TRUFFLE/IBIS), and θ_(GERMLINE/IBIS), and stepwise multiple linear regression was performed with 10-fold cross-validation using the packages caret, leaps, and MASS in R. Aside from the four parameters θ_KING, θ_IBIS, θ_TRUFFLE, and θ_GERMLINE, the other variables were constructed to measure the difference in output values between these approaches: $\theta_{\Delta(A,B)} = \theta_A - \theta_B$ and $\theta_{(A/B)} = \theta_A / \theta_B$, where A and B each represent one of the four approaches. Note that since IBIS calculates θ with a supplemental kinship coefficient factor (default = 0.00138), only θ_IBIS is never 0 and was therefore used as the denominator when calculating θ_(A/B).

As shown in , several models were constructed, and we found that model M7 fit best; it is hereafter referred to as the combination method. This model was quite robust to genotyping errors, and the estimated kinship coefficients were maintained across the error range. As shown in , the θ_combination values were closer to the expected values than those of any single approach alone, especially when the genotyping error rate exceeded 1%. Given this, integrating MoM and IBD segment-based methods has the potential to improve tolerance to genotyping errors. Again, to quantitatively evaluate performance in the presence of genotyping error for the four tools and our newly established model, the overlapping rate (treating the 164 K SNP panel with no genotyping errors as the reference panel), Sen, PPV, and AC were assessed. As shown in , these four indicators all remained stable when the error rate was low.
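As a simplified illustration of the model-selection step described above, the following R sketch fits a stepwise linear model with 10-fold cross-validation via caret and MASS; the data frame and predictor names are random placeholders standing in for the expected kinship coefficient and the 13 constructed θ variables, not the study's actual code:

    # Stepwise linear regression (MASS::stepAIC via caret) with 10-fold CV.
    library(caret)
    library(MASS)
    set.seed(1)
    dat <- as.data.frame(matrix(rnorm(200 * 14), ncol = 14))
    names(dat) <- c("expected_theta", paste0("pred", 1:13))
    ctrl <- trainControl(method = "cv", number = 10)
    fit  <- train(expected_theta ~ ., data = dat,
                  method = "lmStepAIC", trControl = ctrl, trace = FALSE)
    summary(fit$finalModel)  # retained terms define the fitted combination model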
When the genotyping error rate increased further, the four indicators of KING and the combination method remained stable, while those of IBIS, TRUFFLE, and GERMLINE showed a sharp decrease at the 5% error rate, indicating that KING and the combination method are more robust to genotyping error. Notably, for KING, Sen decreased gradually with increasing genotyping error rates for first- to third-degree relationships as well as unrelated pairs but increased for fourth- to seventh-degree relationships. This was consistent with the distributions of θ_KING in , which showed narrow variation and were slightly overestimated at high genotyping error rates. A similar explanation applies to the trends of the overlapping rates, which remained close to 1 as the genotyping error rate increased, except for first-degree pairs with KING. In addition, the overlapping rates for IBIS, TRUFFLE, and GERMLINE also remained close to 1 for distant relationships. This can be explained by a failure of IBD segment detection and a large proportion of θ values equal to the lower limit (0).

In terms of accuracy, when the genotyping error rate was relatively low, IBIS outperformed the other three tools. When the error rate increased to 1%, TRUFFLE performed better and had a higher AC. However, when the error rate increased to 5%, even the error-tolerant KING showed a marked decrease in AC. Interestingly, our newly established model, the combination method, showed very stable ACs under different levels of genotyping error, which suggests that the model is reliable for identifying close relationships even when the genotyping error rate is ultra-high. Overall, increasing the rate of genotyping errors has significant effects on kinship inference, especially when the rate exceeds 1%. MoM and the IBD segment-based methods performed differently in response to genotyping errors: KING was the best performer, followed by TRUFFLE, IBIS, and GERMLINE. In addition, by taking advantage of the differences in the estimated kinship coefficients of the four tools, we showed that integrating MoM and IBD segment-based methods can improve tolerance to genotyping errors.

3.3. Performance Using Real Samples

Ultimately, six intact (100 ng), 24 diluted (10 ng, 1 ng, 0.5 ng, and 0.1 ng), and 24 degraded (1500 bp, 800 bp, 400 bp, and 150 bp) DNA samples were employed to assess the performance of each approach on challenging samples. After genotyping and filtering, 302,756 SNPs were retained. Subsequently, using the original DNA as a reference, the rates and types of genotyping errors for the diluted and degraded DNA samples were estimated. Three types of errors were studied: (1) drop-in errors, in which an additional allele is reported (e.g., "AA" becomes "AG"); (2) drop-out errors, in which an allele is absent from the sample (e.g., "AG" becomes "AA"); and (3) switch errors, in which opposite homozygous genotypes are reported (e.g., "AA" becomes "GG"). As shown in , error rates increased with decreasing DNA input and with increasing levels of degradation. When the amount of input DNA was at least 0.5 ng or the average fragment length was at least 400 bp, the error rates were relatively low (<0.012), and the predominant type of genotyping error was drop-in. However, a notable increase in error rates (up to 0.070) was observed when the amount of input DNA decreased to 0.1 ng, and the dominant error type changed to drop-out.
Furthermore, a significant escalation in error rates was also observed when the length of the DNA fragments decreased to 150 bp, with an overall error rate as high as 0.137. Generally, high-quality DNA samples can be obtained from a reference (either a reference sample from a database or the person of interest), but crime scene samples are mostly of low quality. Therefore, we paired each intact DNA sample with the corresponding diluted or degraded samples, resulting in 72, 36, 18, 36, 54, 36, and 18 pairs of first- to seventh-degree relationships, respectively. Kinship coefficient estimation and kinship inference for these pairs were performed using KING, IBIS, TRUFFLE, GERMLINE, and the combination method.

illustrates that under moderate quality or quantity conditions (input DNA ≥ 0.5 ng or average fragment length ≥ 400 bp), KING, IBIS, TRUFFLE, and the combination method performed comparably and correctly identified most close relationships (up to second-degree relationships). However, GERMLINE performed rather poorly, possibly owing to phasing errors caused by high rates of missing data and forced phasing (using the flag -force in SHAPEIT). For distant relationships (fifth- to seventh-degree), all approaches incorrectly identified most pairs. TRUFFLE and GERMLINE misidentified most pairs (70 pairs) as unrelated, while KING and IBIS correctly identified 15 and 14 pairs, respectively. The combination method achieved the highest number of correct identifications (17 pairs) but misidentified 38 pairs as relationships within one degree of difference and 28 pairs as unrelated. Under more challenging conditions, that is, when input DNA decreased to 0.1 ng or the average fragment length decreased to 150 bp, correct identifications were limited to a few close relationships. The combination method performed best for first-degree relationships, correctly identifying six out of sixteen pairs, followed by TRUFFLE with four, KING with two, IBIS with one, and GERMLINE with none. For second-degree relationships, all approaches except GERMLINE correctly identified only one out of eight pairs. In summary, when dealing with samples of moderate quality or quantity (genotyping error rates below 1%), KING and IBIS performed better and correctly identified most first- to fourth-degree relationships. However, when faced with more challenging samples (genotyping error rates close to 5–10%), the combination method outperformed the existing tools for close kinship inference.
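For clarity, the indicators used to score performance throughout Sections 3.1–3.3 reduce to simple proportions over pairs; a minimal R sketch with hypothetical labels (not study data):

    # Sen: proportion of known relationships of a given degree correctly inferred.
    # PPV: proportion of pairs inferred as a given degree that truly are.
    # AC:  proportion of all known relationships correctly inferred.
    sen <- function(truth, inferred, rel) mean(inferred[truth == rel] == rel)
    ppv <- function(truth, inferred, rel) mean(truth[inferred == rel] == rel)
    ac  <- function(truth, inferred) mean(truth == inferred)
    truth    <- c("degree 1", "degree 2", "unrelated")  # hypothetical labels
    inferred <- c("degree 1", "degree 3", "unrelated")
    sen(truth, inferred, "degree 1")  # 1
    ac(truth, inferred)               # 2/3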
4. Discussion

MoM and IBD segment-based methods are both widely used for kinship inference by scientists and investigators. However, there is still a lack of evaluation regarding which approach should be used in forensic practice. In this study, we compared the performance of four common tools given different numbers of SNPs and different levels of genotyping error and explored the potential to improve tolerance to genotyping errors by integrating MoM and the IBD segment-based method. All four approaches had high stability when the number of SNPs was larger than 164 K, and the three IBD segment-based tools had higher accuracy in identifying distant relationships, which is consistent with previous studies. However, with fewer than 82 K SNPs, only KING could provide relatively reliable inferences, albeit with significantly reduced efficiency compared with scenarios with more SNPs. It should be noted that although the efficiency of all four approaches decreased as the number of SNPs decreased, the reasons differ slightly.
KING, as a method-of-moments (MoM) estimator, estimates the proportion of IBD segments based on the number of identity-by-state SNP markers. As the number of SNPs decreases, random sampling effects increase and the estimated kinship coefficients show greater variation, causing KING's efficiency to decrease gradually. In contrast, IBIS, TRUFFLE, and GERMLINE, as IBD segment-based tools, identify segments based on specific thresholds related to IBD length, the distance between markers, and/or the number of markers. As the number of SNPs decreases, more IBD segments fail to meet these thresholds. In addition, as the number of markers decreases, the distance between adjacent SNPs increases and may exceed the distance threshold, which can interrupt an IBD segment so that it fails the IBD length threshold. All these factors can reduce the efficiency of IBD detection and of subsequent kinship inference.

Generally, if a sample is of good quality, the output data are expected to have low genotyping error. For such samples, if fewer than 82 K SNPs are obtained, MoM estimators (e.g., KING) are recommended, whereas if more than 164 K SNPs are obtained, IBD segment-based tools (e.g., IBIS) are preferred. In contrast, if a sample is of poor quality and is expected to have high genotyping error, both MoM and IBD segment-based methods exhibit a significant decline, especially when the genotyping error rate exceeds 1%; however, MoM is more appropriate than IBD segment-based methods for these challenging samples. We also showed that combining MoM and IBD segment-based methods can improve the accuracy of identifying close relationships, even at ultra-high genotyping error rates (above 5%). Therefore, if the sample quality is uncertain, we recommend adopting both MoM estimators and IBD segment-based tools. More importantly, if there is a big difference in the estimated kinship coefficients between the two types of methods, we recommend adopting models that combine the outputs of all the related tools.

Several caveats should be noted. First, although GERMLINE is considered one of the most accurate methods, its use in forensic practice is questionable. As demonstrated in , when dealing with genotype data with high missing rates, phasing may introduce additional errors, which in turn may decrease accuracy. Given that the quality of forensic samples is often unknown and varies significantly, GERMLINE may not be suitable for forensic DNA analysis. In addition, although both the simulated data and the mock challenging samples showed that the combination method had increased accuracy for challenging samples, the model was trained on simulated datasets of 164 K SNPs with genotyping error rates of 0–10%; in forensic practice, genotyping error rates may exceed this range, and performance may be reduced accordingly. Finally, although adjusting parameters may theoretically lead to better performance, previous studies showed that no combination of reasonably permissive parameters could rescue the performance of existing methods when the genotyping error was in the 1–5% range. Furthermore, the analyses of real samples indicated that it is still difficult to correctly infer relationships with 0.1 ng of DNA or with DNA of average fragment length 150 bp. Therefore, there is an urgent need to develop new algorithms to address this issue. For example, Snedecor et al. proposed an IBD-based method that was accurate up to fifth-degree relatives using only 10,000 SNPs.
This windowed kinship algorithm uses thresholds that are slightly lower than the theoretical values, making it relatively robust to genotyping error as well. Additionally, several other approaches, such as optimized MoM algorithms and machine learning methods, have been developed in recent years and have shown robustness. These methods are promising and will be investigated in our future work. In addition, methods designed for ancient DNA may also be promising alternative tools in forensic investigative genetic genealogy (FIGG). Benefitting from the development of public databases, FIGG has grown rapidly since the well-known Golden State Killer case was solved and has been used to solve hundreds of active and cold cases. However, there are still many problems to be addressed. For example, different algorithms are used for different databases, and their performance needs to be evaluated, especially with respect to crime scene samples. In addition, since investigators need to upload high-density SNP data of case samples to large-scale public databases, concerns about privacy and data loss have been raised. Therefore, maintaining a balance between privacy protection and efficient application is also an issue that needs to be addressed in the future.

5. Conclusions

The existing methods are sufficient for kinship inference (first- to seventh-degree and unrelated relationships) using genotyping data with more than 164 K SNPs and less than 1% genotyping error. MoM estimators need significantly fewer SNPs and are more robust to genotyping errors, while IBD segment-based methods are more effective in identifying distant relationships. If a sample is of good quality, IBD segment-based tools such as IBIS are preferred; otherwise, MoM estimators should be used. In addition, combining both types of methods can improve performance when the genotyping error is high and is promising for challenging forensic samples. This study sheds light on how to select the appropriate method based on the number of SNPs or the genotyping error rate in FIGG and other complex kinship casework.
A Highly Rare Complication: Right Obturator Artery Pseudoaneurysm in a Crohn's Disease Patient Due to Multiple Perianal Abscesses and Drainages.

BACKGROUND The obturator artery originates from the anterior branch of the internal iliac artery. It exits the pelvic region by passing through the obturator foramen, lateral to Cooper's ligament. Its main function is to supply blood to the tissues surrounding the upper obturator foramen as well as certain parts of the acetabulum. In 19% of cases, the obturator artery arises from the external iliac artery, making it more superficial and more vulnerable during femoral central venous catheter insertion. When the obturator artery is injured, it can lead to severe and uncontrolled bleeding. The injury can manifest as severe extravasation or as pseudoaneurysm formation. These complications have the potential to cause significant and life-threatening hemorrhage.

OBJECTIVE To provide a full report of the first documented case of an obturator artery pseudoaneurysm arising as a complication of perianal involvement in a patient with Crohn's disease. This detailed analysis emphasizes the diagnostic challenges and management strategies employed, aiming to enhance awareness among clinicians of the potential for vascular complications in patients with complex perianal disease. The report illustrates the importance of a multidisciplinary approach for optimal diagnosis and treatment in similar cases.

CASE PRESENTATION A 39-year-old man with longstanding Crohn's disease complicated by severe perianal fistulas and abscesses, who had previously undergone perianal abscess drainage seven times, presented to the Emergency Room with minimal bloody discharge originating from his perianal fistula, a symptom that had started one month prior. On examination, the patient exhibited multiple scars indicative of prior surgical interventions, with two external openings at the 7 and 9 o'clock positions around the anus, accompanied by a slight tinge of blood. The patient's underwear was soaked with a small amount of blood; however, no active bleeding was observable from either the fistula or the rectum. The laboratory workup was unremarkable apart from a minimal drop in hemoglobin (Hgb = 12.2). Given the patient's stable condition, MRI was scheduled on a routine basis to follow up his preexisting perianal disease. The MRI showed an unexpected finding suggestive of a pseudoaneurysm. Further urgent evaluation with ultrasound [Figure 2] and CT angiography [Figure 3] of the pelvis was performed. Interventional radiology was consulted, and angiography with embolization was performed successfully. The patient had an excellent immediate result, and multiple postoperative clinical follow-ups showed no recurrence.

DISCUSSION Perianal involvement occurs in approximately 19% of patients with Crohn's disease. Perianal fistulas are the most common manifestation, accounting for 52% of cases, while the incidence of perianal abscess reaches up to 48.4%. The occurrence of an obturator artery pseudoaneurysm in the context of multiple perianal abscess drainages in a patient with Crohn's disease is exceedingly rare, and no similar cases have previously been published in the literature.
During pelvic procedures such as abscess drainage (as in our case), inguinal hernioplasty, and prostatectomy, the presence of retropubic and ischiorectal fat can make it difficult to clearly visualize and identify these small vessels, increasing the risk of iatrogenic injury. The obturator veins are also susceptible to injury. Timely identification of a possible obturator artery injury is crucial to minimize the associated risks and negative outcomes, thereby reducing both morbidity and mortality. Early clinical recognition allows prompt intervention and appropriate management, which can greatly improve patient outcomes. A contrast-enhanced CT scan can reveal the presence of a hematoma or the leakage of contrast material into the pelvis. This imaging technique provides enhanced visualization of these abnormalities, aiding in the diagnosis and assessment of a potential obturator artery injury. Interventional radiology plays a crucial role as the primary approach for both diagnosing and treating obturator artery injuries. The preferred method is super-selective transarterial embolization, a technique first described in the 1980s. This procedure involves accessing the injured artery through minimally invasive techniques and selectively blocking blood flow to the affected area, effectively treating the pseudoaneurysm. By utilizing interventional radiology and transarterial embolization, clinicians can achieve an accurate diagnosis and provide targeted treatment for obturator artery injuries.

CONCLUSION An obturator artery pseudoaneurysm following multiple perianal abscess drainages in a patient with Crohn's disease is exceedingly rare. Nevertheless, it should be considered in patients presenting with bloody perianal discharge, as it can result in life-threatening hemorrhage.
Exploring Racial Disparities in Awareness and Perceptions of Oncology Clinical Trials: Cross-Sectional Analysis of Baseline Data From the mychoice Study

Background

The underrepresentation of racial and ethnic minoritized populations in cancer clinical trials is well established, particularly among Black/African American adults. Despite federal initiatives and policies aimed at increasing cancer clinical trial enrollment and participation rates of underrepresented groups, rates have not improved among people from racial and ethnic minoritized groups, and in some cases, the rates have even declined. Attributable to factors across multiple levels of influence, the underrepresentation of Black/African American adults in cancer clinical trials means that drugs and interventions are developed, tested, and disseminated to populations not reflective of the broader US cancer population, perpetuating health inequities. For example, 1 study found that Black/African American adults comprised only 7.4% of all participants in US Food and Drug Administration clinical trials that led to new, approved cancer drugs from 2014 to 2018. The participation-to-prevalence ratio reflects the representation of Black/African American adults in the clinical trial population relative to the general cancer population, where a ratio of 1 means there is identical or equal representation between groups. Across cancer types, the estimated participation-to-prevalence ratio for Black/African American US adults was 0.31, indicating significant underrepresentation in clinical trials that result in Food and Drug Administration approvals for cancer drugs. Importantly, Black/African American adults are also less likely to participate in trials of novel treatments and technologies, such as precision oncology. These disproportionately low rates of clinical trial participation among racial and ethnic minorities limit the understanding of medical professionals and the greater research community of how well new diagnostic technologies, treatment options, and supportive care services work for racial and ethnic minorities in comparison with the predominantly White clinical trial participant population.

In addition to underrepresentation in cancer clinical trials, inequities in cancer care and survival rates persist. Greater inclusion of Black/African American patients in cancer clinical trials is, therefore, essential to design and test interventions to address inequities in cancer care among Black/African American patients. For example, non-Hispanic Black/African American patients have significantly greater cancer diagnosis delay, treatment delay, and likelihood of diagnosis at an advanced cancer stage compared with non-Hispanic White patients. Even after accounting for cancer stage, cancer type, and other relevant covariates, Black/African American patients still have significantly lower survival rates than White patients. Prior studies have found that non-Hispanic Black/African American patients have less awareness of cancer clinical trials and hold specific attitudes and beliefs about trial participation relative to non-Hispanic White patients.
For example, in a qualitative study of Black/African American cancer survivors who received cancer treatment at a safety-net hospital, the primary clinical trial participation barriers were (1) limited knowledge and understanding of cancer clinical trials and (2) medical mistrust, fears, and other negative perceptions of cancer clinical trials. Participants also described wanting a peer patient navigator (a cancer survivor of a concordant racial or ethnic group) who was well versed in clinical trial knowledge and who could provide other forms of social support (eg, social or emotional, faith-based or spiritual, and instrumental support). These results were consistent with other studies emphasizing the roles of knowledge or awareness, medical mistrust, and social support in clinical trial enrollment, study participation, and retention over time. Other specific attitudes held by Black/African American patients with cancer more than White patients include lower perceived cancer susceptibility and greater doubt about the usefulness and feasibility of translating cancer clinical trial results into clinical practice.

Other patient-level factors associated with less knowledge and awareness of cancer clinical trials include living in a rural area, living farther away from universities or large hospital networks, older age, limited English language proficiency, lower educational attainment, and lower annual household income. Conversely, greater cancer clinical trial knowledge and likelihood of trial participation are associated with a prior cancer diagnosis, having a routine source of health care (ie, primary care access), and higher educational attainment. Patients' clinical trial knowledge and awareness are essential constructs for researchers to consider because the quality of communication between clinical trial staff and prospective trial participants is, in part, dependent upon patients' clinical trial knowledge and confidence.

Negative attitudes toward cancer clinical trials, particularly greater concerns, are associated with cancer fatalism. Other concerns cited by Black/African American patients with cancer that are associated with decreased cancer clinical trial intentions are greater fear of the unknown, fear of death, prior negative health care or clinical trial experiences, fear of receiving an inferior treatment or placebo, lower health literacy, anticipated discrimination, and medical mistrust. Structural racism, historical injustices, and unethical research practices have disproportionately affected Black/African American people and have perpetuated concerns of anticipated mistreatment by research personnel and broader medical mistrust. Importantly, levels of cancer-related knowledge and specific attitudes toward cancer clinical trials are associated with cancer clinical trial participation rates among Black/African American patients with cancer. For example, a qualitative study among Black men found that perceptions of greater research integrity and transparency were positively associated with willingness to participate in prostate cancer surveillance screening and clinical trials. Other factors positively associated with willingness to participate in cancer research were having a family history of cancer, seeing greater value in screening and cancer prevention, and having more interest in learning about cancer and other health-related information.
At the interpersonal level, Black/African American patients with cancer have differential access to cancer clinical trial information attributable to provider biases and patient-provider communication quality. For example, clinical trials are often initially discussed with patients by their health care providers, but provider bias, including racism and discrimination, results in less information sharing and discussion about cancer screenings, clinical trials, and cancer treatment options for Black/African American patients than for White patients. At the clinic level, limited hiring of providers with language fluency beyond English reduces clinic access and decreases the feasibility of within-session information sharing about clinical trials for patients and families with limited English language proficiency. Importantly, many Black/African American patients report not being offered a trial during their cancer care, despite overall positive perceptions of clinical trials, further exacerbating the inequity. Finally, it should be noted that individual-level awareness of clinical trials is only minimally helpful as an interventional target when structural and systemic factors more strongly drive participation rates. For example, studies have repeatedly demonstrated that some of the greatest barriers to clinical trial enrollment are inequitable clinical trial referral and enrollment practices and stringent trial eligibility criteria.

Recent programs and initiatives implemented to increase awareness of cancer clinical trials among Black/African American patients have recognized that awareness must be addressed at multiple levels of influence to advance health equity. For example, a June 2022 article published by the American Society of Clinical Oncology suggests that clinics and health care facilities use 1 of 2 standardized clinic self-assessment tools to review their enrollment practices and patient-, provider-, and system-level barriers to clinical trial enrollment.

This study is a cross-sectional analysis of baseline data from a parent randomized controlled trial (RCT) designed to evaluate the impact of a multicultural, clinical trial preparatory digital health tool (mychoice) versus standard National Cancer Institute information for patients with cancer. mychoice was conceptualized and developed by a team of investigators at Fox Chase Cancer Center and the Temple University College of Public Health through extensive formative research with Black/African American patients, expertise in health disparities and clinical trial participation, commercial marketing techniques (perceptual mapping and vector message modeling), and best practices in digital health and patient engagement. Although founded on clinical trial participation barriers significant to underrepresented patients, the tool is designed to be appropriate for all patients with cancer and to represent diverse patient perspectives.

Objectives

A diverse sample of patients enrolled in the parent RCT completed a baseline survey before viewing the decision-making tool, providing an opportunity to explore racial disparities in a variety of factors previously linked to clinical trial participation rates and the clinical trial participation decision-making process.
On the basis of the formative work conducted with Black/African American patients to inform the digital health tool used in the parent RCT, this study sought to confirm whether the factors identified in that formative work were, in fact, salient to Black/African American patients with cancer relative to non–Black/African American patients with cancer at baseline. Findings will help explain Black/African American versus non–Black/African American participant responses to the culturally tailored clinical trial decision-making tool and also help identify factors that could further refine the tool. In addition, findings can be used to tailor and prioritize topics in provider education and training to better support the needs of Black/African American patients with cancer in cancer clinical trial decision-making.
The underrepresentation of racial and ethnic minoritized populations in cancer clinical trials is well-established , particularly among Black/African American adults . Despite federal initiatives and policies aimed at increasing cancer clinical trial enrollment and participation rates of underrepresented groups, rates have not improved among people from racial and ethnic minoritized groups, and in some cases, the rates have even declined . Attributable to factors across multiple levels of influence , the underrepresentation of Black/African American adults in cancer clinical trials means that drugs and interventions are developed, tested, and disseminated to populations not reflective of the broader US cancer population, perpetuating health inequities . For example, 1 study found that Black/African American adults comprised only 7.4% of all participants in US Food and Drug Administration clinical trials that led to new, approved cancer drugs from 2014 to 2018 . The participation-to-prevalence ratio reflects the representation of Black/African American adults in the clinical trial population relative to the general cancer population, where a ratio of 1 means there is identical or equal representation between groups. Across cancer types, the estimated participation-to-prevalence ratio for Black/African American US adults was 0.31, indicating significant underrepresentation in clinical trials that result in Food and Drug Administration approvals for cancer drugs . Importantly, Black/African American adults are also less likely to participate in trials of novel treatments and technologies, such as precision oncology . These disproportionately low rates of clinical trial participation among racial and ethnic minorities result in limited understanding by medical professionals and the greater research community of how well new diagnostic technology, treatment options, and supportive care services are working for racial and ethnic minorities in comparison to the predominantly White clinical trial participant population . In addition to underrepresentation in cancer clinical trials, inequities in cancer care and survival rates persist . Greater inclusion of Black/African American patients in cancer clinical trials is, therefore, essential to design and test interventions to address inequities in cancer care among Black/African American patients. For example, non-Hispanic Black/African American patients have significantly greater cancer diagnosis delay , treatment delay , and likelihood of diagnosis at an advanced cancer stage compared with non-Hispanic White patients. Even after accounting for cancer stage, cancer type, and other relevant covariates, Black/African American patients still have significantly lower survival rates than White patients . Prior studies have found that non-Hispanic, Black/African American patients have less awareness of cancer clinical trials and hold specific attitudes and beliefs about trial participation relative to non-Hispanic, White patients . For example, in a qualitative study of Black/African American cancer survivors who received cancer treatment at a safety-net hospital, the primary clinical trial participation barriers were (1) limited knowledge and understanding of cancer clinical trials and (2) medical mistrust, fears, and other negative perceptions of cancer clinical trials. 
Participants also described wanting a peer (cancer survivor of a concordant race or ethnicity group) patient navigator who was well-versed in clinical trials knowledge and who could provide other forms of social support (eg, social or emotional, faith-based or spiritual, and instrumental support) . These results were consistent with other studies emphasizing the roles of knowledge or awareness, medical mistrust, and social support in clinical trial enrollment; study participation; and retention over time . Other specific attitudes held by Black/African American patients with cancer more than White patients include lower perceived cancer susceptibility and greater doubt about the usefulness and feasibility of translating cancer clinical trial results into clinical practice . Other patient-level factors associated with less knowledge and awareness of cancer clinical trials include living in a rural area , living farther away from universities or large hospital networks , older age , limited English language proficiency , lower educational attainment , and less annual household income . Conversely, greater cancer clinical trial knowledge and the likelihood of trial participation are associated with a prior cancer diagnosis , having a routine source of health care (ie, primary care access) , and higher educational attainment . Trial populations’ clinical knowledge and awareness are essential constructs for researchers to be aware of because the quality of communication between clinical trial staff and prospective trial participants is, in part, dependent upon patients’ clinical trial knowledge and confidence . Negative attitudes toward cancer clinical trials, particularly having greater concerns, are associated with cancer fatalism . Other concerns cited by Black/African American patients with cancer associated with decreased cancer clinical trial intentions are greater fear of the unknown , fear of death , prior negative health care or clinical trial experiences , fear of receiving an inferior treatment or placebo , lower health literacy , anticipated discrimination , and medical mistrust . Structural racism, historical injustices, and unethical research practices have disproportionately affected Black/African American people and have perpetuated concerns of anticipated mistreatment by research personnel and broader medical mistrust . However, levels of cancer-related knowledge and specific attitudes toward cancer clinical trials are associated with cancer clinical trial participation rates among Black/African American patients with cancer. For example, a qualitative study among Black men found that perceptions of greater research integrity and transparency were positively associated with willingness to participate in prostate cancer surveillance screening and clinical trials . Other factors positively associated with willingness to participate in cancer research were having a family history of cancer, seeing greater value in screening and cancer prevention, and having more interest in learning about cancer and other health-related information . At the interpersonal level, Black/African American patients with cancer have differential access to cancer clinical trial information attributable to provider biases and patient-provider communication quality. 
For example, clinical trials are often initially discussed with patients by their health care providers, but provider bias, including racism and discrimination, results in less information sharing and discussion about cancer screenings, clinical trials, and cancer treatment options for Black/African American patients than for White patients . At the clinic level, limited hiring of providers with language fluency beyond English reduces clinic access and decreases the feasibility of within-session information sharing about clinical trials for patients and families with limited English language proficiency . Importantly, many Black/African American patients report not being offered a trial during their cancer care , despite overall positive perceptions of clinical trials, further exacerbating the inequity . Finally, it should be noted that individual-level awareness of clinical trials is only minimally helpful as an interventional target when structural and systemic factors more strongly drive participation rates. For example, studies have repeatedly demonstrated that some of the greatest barriers to clinical trial enrollment are inequitable clinical trial referrals and enrollment practices and stringent trial eligibility criteria . Recent programs and initiatives implemented to increase awareness of cancer clinical trials among Black/African American patients have recognized that awareness must be addressed at multiple levels of influence to advance health equity. For example, a June 2022 article published by the American Society of Clinical Oncology suggests that clinics and health care facilities use 1 of 2 standardized clinic self-assessment tools to review their enrollment practices and patient-, provider-, and system-level barriers to clinical trial enrollment . This study is a cross-sectional analysis of the baseline data from a parent randomized controlled trial (RCT) designed to evaluate the impact of a multicultural, clinical trial preparatory digital health tool (mychoice) or standard National Cancer Institute information for patients with cancer. mychoice was conceptualized and developed by a team of investigators at Fox Chase Cancer Center and the Temple University College of Public Health through extensive formative research with Black/African American patients, expertise in health disparities and clinical trial participation, commercial marketing techniques (perceptual mapping and vector message modeling), and best practices in digital health and patient engagement . Although founded on clinical trial participation barriers significant to underrepresented patients, the tool is designed to be appropriate for all patients with cancer and to represent diverse patient perspectives.
A diverse sample of patients enrolled in the parent RCT completed a baseline survey before viewing the decision-making tool, providing an opportunity to explore racial disparities in a variety of factors previously linked to clinical trial participation rates and to the clinical trial decision-making process. Building on the formative work with Black/African American patients that informed the digital health tool used in the parent RCT, this study sought to confirm whether the factors identified in that work were, in fact, more salient to Black/African American patients with cancer than to non–Black/African American patients with cancer at baseline. Findings will help explain Black/African American versus non–Black/African American participant responses to the culturally tailored clinical trial decision-making tool and help identify factors that could further refine the tool. In addition, findings can be used to tailor and prioritize topics in provider education and training to better support the needs of Black/African American patients with cancer in cancer clinical trial decision-making.
Participants

The analytical sample at baseline included patients with cancer from 4 leading cancer centers in Philadelphia (Fox Chase Cancer Center, Temple University Hospital, University of Pennsylvania’s Abramson Cancer Center, and Thomas Jefferson University’s Sidney Kimmel Cancer Center) who consented to participate in the parent RCT (NCT03427177) and completed the baseline survey; 3 of the 4 recruitment sites are National Cancer Institute–designated cancer centers. Eligible patients were actively being treated for cancer or in follow-up care (ie, within 6 months of definitive treatment), aged ≥18 years, able to speak and read English, and had not participated in a therapeutic clinical trial. The parent RCT was planned to enroll 270 participants. In total, 257 participants consented and 249 (96.9%) completed the baseline survey. Patients of all racial and ethnic groups were eligible for the RCT, but only 244 (98%) of the 249 completed baseline surveys included valid (nonmissing) data for race; these were analyzed in this study.

Instruments

Overview

The survey was developed using both validated instruments and study-related measures from formative work, including both qualitative interviews and surveys with Black/African American patients with cancer. Variables included in the present analyses were sociodemographic characteristics (ie, age, race, ethnicity, gender, income, educational attainment, insurance type, and cohabitation status), dichotomized race group (Black/African American vs non–Black/African American), clinical characteristics (ie, cancer stage and treatment status), general clinical trial knowledge, health literacy, cancer clinical trial perceptions (awareness, benefits, concerns, cancer and health care experiences, and beliefs about health care providers and health), patient activation in cancer care, patient self-advocacy, self-efficacy in health care interactions, decisional conflict, and clinical trial intentions.

General Knowledge of Clinical Trials

General knowledge of clinical trials was assessed using 16 revised items from the Knowledge of Clinical Trials scale by Campbell et al. Response options were “true” or “false” and were scored for accuracy. Scores were generated as the percentage of questions answered correctly, ranging from 0% to 100%.

Health Literacy

Health literacy was assessed with a single item from the Single Item Literacy Screener, which specifically identifies adults who may need assistance reading and understanding health materials. The item asks, “How often do you need to have someone help you when you read instructions, pamphlets, or other written material from your doctor or pharmacy?” Response options were rated on a 5-point Likert scale, ranging from “1” reflecting “never” to “5” reflecting “always.” On the basis of psychometric testing, scores >2 reflect people with limited health literacy in reading and comprehending written health information.

Cancer Clinical Trial Perceptions

Perceptions of cancer clinical trials were evaluated using 48 items developed by the primary investigators through formative work, reflecting domains of (1) awareness, (2) benefits, (3) concerns, (4) cancer and health care experiences, and (5) beliefs about health care providers and health. Response options were rated on an 11-point Likert scale ranging from 0 to 10, where “0” indicated strong disagreement and “10” indicated strong agreement. Item-level analyses were conducted in this study.
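To make the two scoring rules above concrete, the following is a minimal sketch of how the percent-correct knowledge score and the Single Item Literacy Screener cutoff could be computed. It is illustrative only: the function names, answer key, and example responses are hypothetical and are not taken from the study materials.

```python
# Minimal scoring sketch (illustrative only): the function names, answer key,
# and example responses below are hypothetical, not taken from the study.

def knowledge_score(responses: list[bool], answer_key: list[bool]) -> float:
    """Percentage of the 16 true/false knowledge items answered correctly (0-100)."""
    correct = sum(r == k for r, k in zip(responses, answer_key))
    return 100.0 * correct / len(answer_key)

def limited_health_literacy(sils_response: int) -> bool:
    """Single Item Literacy Screener: responses above 2 flag limited health literacy."""
    return sils_response > 2

# Example: 12 of 16 items correct -> 75.0; a SILS response of 3 -> flagged.
print(knowledge_score([True] * 12 + [False] * 4, [True] * 16))  # 75.0
print(limited_health_literacy(3))                               # True
```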
Patient Activation in Cancer Care

Patient activation for cancer care decision-making was measured with the 10-item Decisional Engagement Scale. This instrument was developed specifically to understand patients’ level of involvement in their cancer care and engagement with active decision-making processes around treatment and care options. Response options were rated on an 11-point Likert scale, ranging from 0 to 10, where “0” meant “doesn’t describe you at all” and “10” meant “perfectly describes you.” In psychometric evaluation, the 10-item Decisional Engagement Scale has demonstrated strong factor structure, reliability, and concurrent validity with health-related quality of life, shared decision-making preferences, and clarity about cancer care preferences.

Patient Self-Advocacy

Patient self-advocacy was measured with the 12-item Patient Self-Advocacy Scale. Response options were rated on an 11-point Likert scale, ranging from 0 to 10. In addition, 1 item (“I don’t get what I need from my physician because I am not assertive enough”) was reverse coded before calculating an average summary score. The scale has demonstrated good internal consistency, construct validity, and criterion validity.

Health Care Self-Efficacy

Self-efficacy to engage with health care providers was measured with the 10-item Perceived Self-Efficacy in Patient-Physician Interactions scale. Items asked about confidence in performing specific health care–related tasks, such as confidence in getting a physician to listen to them, knowing what questions to ask a physician, and getting a physician to take their health concerns seriously. Response options ranged from 1 to 5, where “1” indicated least confidence and “5” indicated most confidence.

Decisional Conflict

Decisional conflict about clinical trial participation was measured with the 13-item Decisional Conflict Scale proposed by O’Connor. Response options were rated on a 5-point Likert scale, ranging from 0 to 4, where “0” reflected “strongly agree” and “4” reflected “strongly disagree.” The 4 subscales (uncertainty, informed, value clarity, and decision support) were scored by summing the items within the subscale, dividing by the number of items within that subscale, and multiplying by 25, yielding a score ranging from 0 to 100. A total score was calculated analogously by summing all 13 items, dividing by 13, and multiplying by 25, again yielding a score from 0 to 100 (a worked example of this scoring follows at the end of this section). In psychometric testing, the scale had good discriminant validity between those who choose versus those who do not choose to engage in a health behavior, and its other psychometric properties were determined to be acceptable.

Clinical Trial Participation Intentions

Intentions to participate in a cancer clinical trial were assessed with a single, modified item from the Choice Predisposition Scale proposed by O’Connor. The item read, “We would like to know what your opinion is about your cancer treatment options at present. When your doctor asks you to make a choice about treatment methods, please indicate how strongly you agree or disagree that you would choose to participate in a clinical trial, if offered.” Response options ranged from 0 to 10, where “0” indicated “strongly disagree,” “5” meant “neither agree nor disagree,” and “10” indicated “strongly agree.” This scale has good psychometric properties, including high test-retest validity, good construct validity, high sensitivity to change, and discriminant validity.
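As a worked example of the Decisional Conflict Scale scoring rule described above (sum the 0-4 item responses, divide by the number of items, and multiply by 25), the following sketch applies the rule to a hypothetical response vector. The item-to-subscale mapping shown is a placeholder; the actual assignments come from O’Connor’s scoring manual.

```python
# Worked example of the Decisional Conflict Scale scoring rule described above.
# The item-to-subscale mapping is a placeholder; actual assignments come from
# O'Connor's scoring manual.

def dcs_score(items: list[int]) -> float:
    """Rescale the mean of 0-4 item responses to 0-100: (sum / n items) * 25."""
    return sum(items) / len(items) * 25

responses = [2, 1, 3, 0, 2, 4, 1, 2, 3, 1, 0, 2, 2]  # hypothetical 13 items (0-4)

total = dcs_score(responses)            # total score over all 13 items
uncertainty = dcs_score(responses[:3])  # placeholder 3-item subscale
print(round(total, 1), round(uncertainty, 1))  # 44.2 50.0
```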
Procedures

Prospective participants were screened for eligibility (aged ≥18 years, cancer diagnosis, receiving current or follow-up care, English speaking, and no previous participation in a clinical trial). Participants provided verbal informed consent either in person or over the phone. Consent was verified via e-consent using REDCap (Research Electronic Data Capture; Vanderbilt University), a web-based application developed to capture data for research. Consented patients were randomized to intervention conditions via REDCap and completed a baseline survey before viewing any intervention content. The baseline survey assessments were web-based and conducted through REDCap. Patients could complete the study either at the hospital using a study iPad (Apple Inc) or at home on their own devices. The baseline survey took approximately 45 minutes.

Statistical Analysis

Univariate statistics using means, SDs, and percentages are presented to characterize the participant sample. Differences in sociodemographic and clinical characteristics between dichotomous race groups (ie, Black/African American and non–Black/African American patients) were evaluated using chi-square tests of independence and independent sample 2-tailed t tests, as appropriate. Independent sample t tests were also used to examine differences between Black/African American and non–Black/African American patients’ clinical trial knowledge, attitudes toward cancer clinical trials, and intentions to participate in a clinical trial. Although some variables (eg, health literacy and self-efficacy in health care interactions) were highly skewed, t tests were still used rather than nonparametric testing because t tests are robust to skewed distributions when the sample size is >200. Homogeneity of variances between groups was evaluated for each item before running independent samples t tests, and the appropriate t test assumptions were applied accordingly (a sketch of this procedure follows the Ethical Considerations subsection). All data analyses were conducted in StataSE (version 17.0; StataCorp).

Ethical Considerations

The study protocol was approved by the Fox Chase Cancer Center’s institutional review board (#17-8013). All procedures involving human participants were in accordance with the ethical standards of the institutional or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. All participants provided verbal informed consent. Verification of consent with e-consent and all other study data were collected in REDCap, a secure web-based application developed to collect and store research data. To protect participants’ privacy, the data were coded before analysis using unique participant study identifiers, and no direct identifiers were in the analytic data set. Participants were compensated US $25 for completing the baseline survey, educational intervention, and posttest survey; however, this paper describes results from the baseline survey data only.
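The between-group comparison logic described under Statistical Analysis can be sketched as follows. The study itself used StataSE 17; this Python/SciPy version with simulated data is only an illustration of checking variance homogeneity before choosing between the standard and Welch t tests, and the group means, SDs, and sizes are borrowed from the Results for realism.

```python
# Sketch of the between-group comparison described under Statistical Analysis,
# using simulated data. The study used StataSE 17; SciPy is shown here only to
# illustrate checking variance homogeneity before choosing the t test variant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(75.6, 12.7, size=95)    # hypothetical scores, group A
group_b = rng.normal(80.7, 14.7, size=149)   # hypothetical scores, group B

# Levene's test for homogeneity of variances between the two groups.
_, p_levene = stats.levene(group_a, group_b)

# Standard t test if variances look equal; Welch's t test otherwise.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=p_levene >= .05)
print(f"t={t_stat:.3f}, P={p_value:.3f}")
```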
Overview

The accompanying tables compare sociodemographic and clinical characteristics by dichotomous race group and show the results of all remaining independent sample t tests for differences in average general clinical trials knowledge, health literacy, perceptions of cancer clinical trials, patient activation, patient self-advocacy, health care self-efficacy, decisional conflict, and clinical trial intentions by race group.

Sociodemographic and Clinical Characteristics

More than a third (95/244, 38.9%) of participants self-identified as Black/African American. Participants’ mean age was 60.89 (SD 10.24) years and did not vary by dichotomous race group. More than half (141/244, 57.8%) had at least some college education, but educational attainment varied significantly between Black/African American and non–Black/African American participants (P<.001). Moreover, 63.1% (154/244) of the sample was female, with a greater percentage of female participants among Black/African American patients (70/95, 73%) than among non–Black/African American patients (84/149, 56.4%; P=.006). Other significant differences between groups were observed for insurance type (ie, a greater percentage of Black/African American patients on Medicare or Medicaid), annual household income (ie, higher household income reported by non–Black/African American patients), and treatment status (ie, a greater percentage of Black/African American patients still receiving treatment as opposed to follow-up care compared with non–Black/African American patients).

General Clinical Trials Knowledge and Health Literacy

Compared to Black/African American patients (mean 75.6, SD 12.7), non–Black/African American patients (mean 80.7, SD 14.7) had significantly higher general clinical trial knowledge scores (t 242 =2.775; P =.006). Health literacy was also higher among non–Black/African American patients (mean 1.47, SD 0.72) than among Black/African American patients (mean 2.06, SD 1.11; t 145.36 =−4.650; P <.001); greater values on this measure reflect lower health literacy.

Awareness of Cancer Clinical Trials

Non–Black/African American patients (mean 7.61, SD 3.33) were significantly more likely to have heard about clinical trials before their cancer diagnosis than Black/African American patients (mean 5.19, SD 3.96; t 238 =5.075; P <.001). However, non–Black/African American patients (mean 5.93, SD 3.55) felt more strongly than Black/African American patients (mean 4.55, SD 3.63) that they did not have sufficient information to decide whether to participate in a cancer clinical trial (t 238 =2.920; P =.004). There were no differences between groups on all other awareness-related items, including information gathering, support for accessing and consuming cancer-related health information, and receiving sufficient information about cancer clinical trials from their health care providers.

Benefits of Clinical Trial Participation

Black/African American patients consistently rated the benefits of cancer clinical trial participation lower than non–Black/African American patients. Specifically, Black/African American patients rated 10 of 11 items about perceived benefits lower than non–Black/African American patients, all of which were statistically significant (P values were .02, .03, .03, .02, .001, .02, <.001, .04, .007, and .04).
Benefits rated lower included having better survival odds, improving quality of life, increasing access to high-quality treatment, having a greater sense of purpose, and helping to find treatments and cures for family members or the public. In fact, the only benefits-related item that did not yield significant differences between groups at the α=.05 level was the belief that clinical trial participation would improve their community’s trust in medical research (“Being part of a clinical trial will improve my community’s trust in medical research”).

Concerns of Clinical Trial Participation

Concerns about cancer clinical trials that varied between racial groups were religious beliefs as barriers, fatalistic beliefs about cancer, and fears of receiving a placebo or sugar pill. Compared to non–Black/African American patients, Black/African American patients with cancer were significantly more likely to believe that their religious or fatalistic beliefs (ie, “God has already decided what will happen so being part of a clinical trial would not help”) would keep them from participating in a clinical trial. However, non–Black/African American patients (mean 4.00, SD 3.66) were significantly more concerned than Black/African American patients (mean 2.72, SD 3.35) about potentially receiving a placebo and not real medicine (t 239 =2.750; P =.006).

Cancer and Health Care Experiences

Religious leaders were more strongly endorsed as a form of social support by Black/African American patients than by non–Black/African American patients. For example, non–Black/African American patients (mean 5.29, SD 3.96) were less likely than Black/African American patients (mean 7.03, SD 3.88) to say they had a pastor or other religious leader whom they trusted and could talk to (t 234 =–3.336; P =.001). However, non–Black/African American patients (mean 7.27, SD 3.11) were more likely to report independently researching treatment options than Black/African American patients with cancer (mean 6.38, SD 3.55; t 237 =2.050; P =.04). In addition, non–Black/African American patients (mean 7.58, SD 3.48) more strongly endorsed having family or close friends who had been diagnosed with cancer and successfully treated than Black/African American patients (mean 6.59, SD 3.90; t 236 =2.038; P =.04).

Beliefs About Health and Health Care Providers

Non–Black/African American patients reported less frequent use of home remedies for medical care growing up than Black/African American patients (t 236 =–5.485; P <.001). In addition, 3 items reflecting distrust of health care providers and medical mistrust were endorsed more strongly by Black/African American patients (“I think that doctors mislead patients,” “I don’t trust medical researchers,” and “I believe racial/ethnic minorities are discriminated against in medical research studies”). However, ratings in both groups remained low and below a neutral score (ie, “5”), reflecting overall low levels of medical mistrust in this sample.

Patient Activation, Patient Self-Advocacy, and Health Care Self-Efficacy

There were no significant differences in average patient activation in cancer care, patient self-advocacy, or health care self-efficacy between Black/African American and non–Black/African American patients (all P >.05).

Decisional Conflict

Of the 4 domains of decisional conflict, only certainty differed significantly between the Black/African American and non–Black/African American patient groups.
Black/African American patients with cancer (mean 25.62, SD 22.17) reported lower certainty in their clinical trial decision-making than non–Black/African American patients (mean 36.24, SD 25.76; t 237 =3.284; P =.001). The remaining 3 decisional conflict domains (informed, value clarity, and support) and the summary decisional conflict score did not differ significantly between groups at the α=.05 level.

Intentions to Participate in a Clinical Trial, if Offered

Intentions to participate in a cancer clinical trial, if offered, did not differ significantly between Black/African American patients (mean 6.38, SD 3.16) and non–Black/African American patients (mean 7.03, SD 2.60) at the α=.05 level (t 174.01 =1.662; P =.10).
Principal Findings

This analysis of baseline data from the mychoice randomized controlled study focused on patient perceptions of cancer clinical trials, comparing Black/African American patients with non-Black patients. Some results are consistent with other research, while others were unexpected and might shift the focus of how best to increase participation among Black/African American patients with cancer. Results indicate that preparation for decision-making, community context, and the opportunity to reframe perceptions about interest in considering clinical trials are important constructs to target in efforts to reduce barriers to participation for Black/African American patients.

Comparisons to Prior Work

Clinical trial decision-making is complex. As suggested by Wenzel et al, the Model of Cancer Clinical Trial Decision-Making provides a framework for exploring these findings from the patient perspective, including information gathering and the intrapersonal and interpersonal factors that influence the decision-making process, all of which ultimately affect decisional outcomes. Our findings suggest that there are differences at the start of the clinical trial decision-making process between Black/African American and non-Black patients. We found that non–Black/African American patients had significantly higher levels of clinical trial knowledge, health literacy, and positive experiences with cancer outcomes, while Black/African American patients were less likely to have heard about clinical trials before their diagnosis, creating inequities from the start. More challenging is combating the realities of later-stage disease at diagnosis and unequal oncology care in many communities of color, where cancer outcomes are less positive. These findings are consistent with the current literature and highlight the need for more community education and awareness about clinical trials using plain language and health communication approaches appropriate for all levels of health literacy. As progress is made to address these inequities, it is important to emphasize these gains in our educational initiatives and to share stories from survivors and clinical trial participants from these communities.

Our study findings are also consistent with other research highlighting that the potential benefits of participation are less likely to resonate with Black patients, including the notion that participation is a benefit to their community. One factor is the higher level of general medical mistrust found in the Black/African American community, which is associated with expectations of lower care quality and poorer treatment experiences. Consistent with existing literature, Black/African American patients with cancer more frequently endorse fatalistic beliefs about the condition. As noted in the model proposed by Wenzel et al, increased fatalism is an important factor in this decision-making process. Addressing these deep-rooted beliefs and experiences requires deeper, authentic discussions with community leaders, providers, and other stakeholders. Religious leaders, specifically, can be messengers who help balance these beliefs because they can play an important role in individuals’ decision-making process.
To improve self-efficacy in cancer clinical trial decision-making and to improve clinical trial experiences overall, prior evidence-based recommendations have been made to establish long-term partnerships not only with health care providers but also with other patients, patient advocates, researchers, clinical trial sponsors, and other community-based organizations (eg, faith-based groups and social services organizations), as well as to form community advisory boards.

We found few differences in facilitators of clinical trial participation by race. Indeed, patients reported that they were confident in gathering support, trusted their physicians, and could get information from their physicians about clinical trials. Although general mistrust was more prevalent among Black/African American patients, their trust in their physician and their ability to get information about clinical trials were similar to those of non-Black patients. This suggests a much more nuanced view of medical mistrust, one that may vary significantly among Black/African American patients depending on a range of sociodemographic factors and life experiences. In addition, it is important to note that general mistrust might be mitigated by the providers delivering direct care, who may come from a variety of specialties, including primary care. Therefore, initiatives and interventions that educate a broad range of providers about clinical trials and emphasize their role in this decision-making process are essential to increasing participation.

An unexpected finding was that non-Black patients reported higher levels of concern about receiving a placebo and felt they did not have sufficient information to decide about participation. This may be related to their higher levels of clinical trial knowledge, which might initially raise more questions and concerns as patients recognize the complexity of the process. As more comprehensive education is conducted in Black/African American communities, we might expect that these issues will need to be specifically addressed there as well.

Perhaps most importantly, there were no differences between Black/African American and non-Black patients in their intention to participate if offered a clinical trial. This was true despite important differences in perceived barriers to participation by race. However, provider and system barriers may impair patients’ ability to turn intention into decision-making and participation. If a trial is available but not offered, an unwarranted assumption has been made that the patient would not be interested; if a trial is not available, there is no decision to make. This expands the Wenzel model beyond the patient, focusing on the multilevel influences on this decision-making process. Future research could pair the mychoice patient tool with provider training and interventions to increase cultural competency and change the knowledge and attitudes of providers and study staff, as well as with culturally tailored education initiatives to increase awareness of clinical trials among racial and ethnic minoritized populations. Our own work developing the mychoice web-based tool to assist diverse patients in the decision-making process serves as an example.

Future Directions

We recognize that patients’ knowledge, attitudes, and interest in clinical trial participation are only one facet of this complex process.
Availability of clinical trials in local settings, systemic barriers to care, language and cultural barriers, provider attitudes, and trial eligibility requirements must all be addressed as well. To date, many programs and interventions have been implemented at multiple levels, including the organization and systems levels, to address systemic factors that drive the continued underrepresentation of people from racial and ethnic minoritized groups in research. For example, 1 system-level approach is the creation of the US Cancer Centers of Excellence and an inventory of successful strategies for increased inclusion of people from racial and ethnic minoritized groups in clinical trials. Specifically, leaders from 8 US cancer centers met to determine best practices for increasing enrollment and retention of clinical trial participants from racial and ethnic minoritized groups. Topics discussed included hiring practices; cultural changes in research organizations; and education or training on equity, diversity, and inclusion among people who study and work in cancer clinical trials. These changes are important because patient-provider identity concordance can motivate greater interpersonal trust, cancer care engagement, and care quality, yet Black oncologists remain significantly underrepresented within the health care workforce, making up only 3% of all oncologists in the United States as of 2021.

Finally, studies should also publish data more frequently on the racial and ethnic composition of their study participants in their published clinical trial reports and in registry results. While applicable to public health and medical fields beyond oncology, increased transparency about the demographic composition of clinical trials will assist with monitoring of diversity, equity, and inclusion progress and support future meta-analytic research. For example, among the 197 precision oncology clinical trials in the United States from 2004 to 2017 reported on ClinicalTrials.gov, fewer than half (n=97, 49.2%) provided race or ethnicity data. Similarly, recent systematic reviews found that only 57% of the 155 head and neck cancer clinical trials between 2010 and 2020 and only 4.4% of the 544 bladder cancer clinical trials published between 1970 and 2020 reported race or ethnicity demographic data.

Limitations

This study has several limitations. First, this was a cross-sectional analysis, which limits causal inference. Second, generalizability is limited to people already receiving care for cancer. This is noteworthy because cancer disparities exist before this point (eg, in detection and treatment provision), meaning that patients who have not engaged with cancer treatment services may hold different beliefs and attitudes. Generalization may also be limited for some specific patient populations, such as recent immigrants and people without adequate health insurance and health care access. Moreover, this was a baseline sample of patients diagnosed with cancer recruited from cancer treatment centers for an RCT. Thus, these participants likely already had higher acceptance of clinical trials because they had consented to be in a behavioral trial. In addition, results also suggest that these participants may have higher acceptance of Western medicine and health care providers because they were already receiving care at a cancer treatment center.
This sample reported low levels of medical mistrust toward health care providers and few negative health care experiences across both the Black/African American and non-Black groups, which is likely not representative of the US adult cancer population, especially Black adults. While social desirability bias can contribute to underreporting of negative health care experiences and other negative health care attitudes and beliefs, the web-based, self-administered survey format may have mitigated the extent to which social desirability bias affected the validity of participant responses. Another potential limitation is that these analyses did not control for multiple comparisons made on the same data set. While some researchers suggest using the Bonferroni adjustment to control for the possibility of finding false positives when making multiple comparisons, there is criticism of its unilateral use in multiple comparison studies. That said, there remains some potential for inflated type 1 error (ie, false positives) given the number of hypotheses tested. Finally, there are additional barriers to cancer clinical trial participation that are not accounted for in the present analysis. For example, older age, insurance type (ie, Medicaid and uninsured vs private insurance), greater medical comorbidities, and greater distance to treatment are associated with lower rates of clinical trial participation and high-quality, guideline-concordant cancer care. Thus, covariate-adjusted analysis methods should be considered for subsequent work.

Conclusions

The findings from the baseline survey of the mychoice randomized trial highlight that although clinical trial participation among diverse populations remains low, there were no significant differences in interest in clinical trials by race, and trust in individual providers was high among both Black/African American and non-Black patients with cancer. However, persistent beliefs about barriers to and benefits of participation in clinical trials exist. Our findings suggest that we need more outreach, discussion, and introduction of clinical trials to diverse oncology patients, who may be more interested than presumed. This does not preclude the considerable work that remains to improve access to clinical trials and to address the systemic barriers to participation. Importantly, the findings from this study suggest that current interventions have not significantly moved the needle in broadening the appeal of clinical trials among Black/African American patients with cancer, and further work to effectively increase participation rates is still needed.
This analysis of baseline data from the mychoice randomized control study focused on patient perceptions regarding cancer clinical trials comparing Black/African American patients to non-Black patients. Some results are consistent with other research while also suggesting some unexpected findings that might shift the focus on how best to increase participation among Black/African American patients with cancer. Results indicate that addressing preparation for decision-making, community context, and the opportunity to reframe perceptions about interest in considering clinical trials are important constructs to target in efforts to reduce barriers to participation for Black/African American patients.
Clinical trial decision-making is complex. As suggested by Wenzel et al , the Model of Cancer Clinical Trial Decision-Making provides a framework to explore these findings from the patient perspective including information gathering, intrapersonal and interpersonal factors that influence the decision-making process, all of which ultimately impact decisional outcomes. Our findings suggest that there are differences at the start of the clinical trial decision-making process between Black/African American and non-Black patients. We found non–Black/African American patients had significantly higher levels of clinical trial knowledge, health literacy, and positive experiences with cancer outcomes, while Black/African American patients were less likely to hear about clinical trials before their diagnosis, creating inequities from the start. More challenging is combating the realities of later-stage disease at diagnosis and unequal oncology care in many communities of color where cancer outcomes are less positive . These findings are consistent with the current literature and highlight the need for more community education and awareness about clinical trials using plain language and health communication approaches appropriate for all levels of health literacy . As progress is made to address these inequities, it is important to emphasize these gains in our educational initiatives and share stories from survivors and clinical trial participants from these communities . Our study findings are also consistent with other research highlighting that the potential benefits of participation are less likely to resonate with Black patients, including the notion that participation is a benefit to their community. One factor is a higher level of level of general medical mistrust found in the Black/African American community , which is associated with expectations of lower care quality and poorer treatment experiences . Consistent with existing literature, Black/African American patients with cancer more frequently endorse fatalistic beliefs about the condition . As noted in the model proposed by Wenzel et al , increased fatalism is an important factor in this decision-making process. Addressing these deep-rooted beliefs and experiences requires deeper, authentic discussions with community leaders, providers, and other stakeholders. Religious leaders, specifically, can be messengers to balance these beliefs because they can play an important role in individuals’ decision-making process . To improve self-efficacy in cancer clinical trial decision-making and to improve clinical trial experiences overall, prior evidence-based recommendations have been made to establish long-term partnerships between not only the health care providers but also with other patients, patient advocates, researchers, clinical trial sponsors, and other community-based organizations (eg, faith-based groups and social services organizations) as well as to form community advisory boards . We found few differences in facilitators to clinical trial participation by race. Indeed, patients reported that they were confident in gathering support, trusted their physicians, and could get information from their physicians about clinical trials. Although general mistrust was more prevalent in Black/African American patients, their trust in their physician and their ability to get information about clinical trials was similar to non-Black patients. 
This points to a more nuanced view of medical mistrust, which may vary significantly among Black/African American patients depending on a range of sociodemographic factors and life experiences. In addition, it is important to note that general mistrust might be mitigated by the clinicians providing direct care, who may come from a variety of specialties as well as primary care. Therefore, initiatives and interventions that educate a broad range of providers about clinical trials and emphasize their role in this decision-making process are essential to increasing participation. An unexpected finding was that non-Black patients reported higher levels of concern about receiving a placebo and felt they did not have sufficient information to decide about participation. This may be related to their higher levels of clinical trial knowledge, which might initially raise more questions and concerns, recognizing the complexity of the process. As more comprehensive education is conducted in Black/African American communities, we might expect that these are issues that will need to be specifically addressed.

Perhaps most importantly, there were no differences between Black/African American and non-Black patients in their intention to participate if offered a clinical trial. This was true despite important differences in perceived barriers to participation by race. However, provider and system barriers may impact the ability of patients to turn intention into decision-making and participation. If a trial is available yet not offered, an unwarranted assumption has been made that the patient would not be interested; if a trial is not available, then there is no decision to make. This expands the Wenzel model beyond the patient, focusing on the multilevel influences on this decision-making process. Future research could include both the mychoice patient tool and provider training and interventions to increase cultural competency and change the knowledge and attitudes of providers and study staff, as well as culturally tailored education initiatives to increase awareness of clinical trials among racial and ethnic minoritized populations. Our own work developing the mychoice web-based tool to assist diverse patients in the decision-making process serves as an example.
We recognize that patients’ knowledge, attitudes, and interest in clinical trial participation are only one facet of this complex process. Availability of clinical trials in local settings, systemic barriers to care, language and cultural barriers, provider attitudes, and trial eligibility requirements must all be addressed as well. To date, many programs and interventions have been implemented at multiple levels, or at the organization or systems level, to address systemic factors that drive the continued underrepresentation of people from racial and ethnic minoritized groups in research. For example, 1 system-level approach is the creation of the US Cancer Centers of Excellence and an inventory of successful strategies for increased inclusion of people from racial and ethnic minoritized groups in clinical trials. Specifically, leaders from 8 US cancer centers met to determine best practices for increasing enrollment and retention of clinical trial participants from racial and ethnic minoritized groups. Topics discussed included hiring practices; cultural changes in research organizations; and education or training on equity, diversity, and inclusion among people who study and work in cancer clinical trials. These changes are important because patient-provider identity concordance can motivate greater interpersonal trust, cancer care engagement, and care quality, yet Black oncologists remain significantly underrepresented within the health care workforce, making up only 3% of all oncologists in the United States as of 2021. Finally, studies should also report the racial and ethnic composition of their participants more frequently in published clinical trial reports and in registry results. While applicable to public health and medical fields beyond oncology, increased transparency about the demographic composition of clinical trials will assist with monitoring of diversity, equity, and inclusion progress and support future meta-analytic research. For example, among the 197 precision oncology clinical trials in the United States from 2004 to 2017 reported on ClinicalTrials.gov, fewer than half (n=97, 49.2%) provided race or ethnicity data. Similarly, recent systematic reviews found that only 57% of the 155 head and neck cancer clinical trials between 2010 and 2020 and only 4.4% of the 544 bladder cancer clinical trials published between 1970 and 2020 reported race or ethnicity demographic data.
This study has several limitations. First, this was a cross-sectional analysis, which limits causal inference. Second, generalizability is limited to people already receiving care for cancer. This is noteworthy because cancer disparities exist before this point (ie, detection, treatment provision, etc), meaning that patients who have not engaged with cancer treatment services may hold different beliefs and attitudes. This may also limit generalization to some specific patient populations, such as recent immigrants or those without adequate health insurance and health care access. Moreover, this was a baseline sample of patients diagnosed with cancer recruited from cancer treatment centers for an RCT. Thus, these participants likely already had higher acceptance of clinical trials because they had consented to be in a behavioral trial. In addition, the results suggest that these participants may have higher acceptance of Western medicine and health care providers because they were already receiving care at a cancer treatment center. This sample reported low levels of health care provider medical mistrust and few negative health care experiences across both the Black/African American and non-Black groups, which is likely not representative of the US adult cancer population, especially Black adults. While social desirability bias can contribute to underreporting of negative health care experiences, attitudes, and beliefs, the web-based, self-administered survey format may have mitigated its impact on the validity of participant responses. Another potential limitation is that these analyses did not control for multiple comparisons made on the same data set. While some researchers suggest using the Bonferroni adjustment to control for the possibility of finding false positives when making multiple comparisons, there is criticism of its unilateral use in multiple comparison studies. That said, there remains some potential for inflated type 1 error (ie, false positives) given the number of hypotheses tested. Finally, there are additional barriers to cancer clinical trial participation that are not accounted for in the present analysis. For example, older age, insurance type (ie, Medicaid and uninsured vs private insurance), greater medical comorbidities, and greater distance to treatment are associated with lower rates of clinical trial participation and high-quality, guideline-concordant cancer care. Thus, covariate-adjusted analysis methods should be considered for subsequent work.
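To make the multiple-comparisons point concrete, here is a minimal sketch in Python; the p values are placeholders, not results from this study, and statsmodels is assumed to be available:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical p values from a family of related group comparisons.
pvals = [0.003, 0.012, 0.049, 0.200]

# Bonferroni evaluates each test at alpha divided by the number of tests.
reject, p_adj, _, alpha_bonf = multipletests(pvals, alpha=0.05, method="bonferroni")

print(f"Bonferroni per-test threshold: {alpha_bonf:.4f}")  # 0.05 / 4 = 0.0125
print(list(zip(p_adj.round(3), reject)))  # adjusted p values and reject flags
```

Under this adjustment, a nominal p=0.049 would no longer be treated as significant, which illustrates why the residual risk of type 1 error is flagged above.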
The findings from the baseline survey of the mychoice randomized trial highlight that, although clinical trial participation among diverse populations remains low, there were no significant differences in interest in clinical trials, and trust in individual providers was high among both Black/African American and non-Black patients with cancer. However, persistent beliefs about barriers to and benefits of participation in clinical trials exist. Our findings suggest the need for more outreach, discussion, and introduction of clinical trials to diverse oncology patients, who may be more interested than presumed. This does not diminish the considerable work that remains to improve access to clinical trials and address the systemic barriers to participation. Importantly, the findings from this study suggest that current interventions have not significantly moved the needle in broadening the appeal of clinical trials among Black/African American patients with cancer, and further work to effectively increase participation rates is still needed.
Opportunities and challenges for scientific exchange in the Baltic Sea region: Lessons from the Baltic conferences on obstetrics and gynecology (1987–2001)

Currently, research about how the circulation of knowledge in medicine played out during the Cold War is conducted by the “Bridging the Baltic network”, which consists of around twenty historians of medicine and Cold War historians in the Baltic Sea region. The network is funded by the German Research Foundation (PI: Nils Hansson).
Performance and acute procedural outcomes of the EnSite Precision™ cardiac mapping system for electrophysiology mapping and ablation procedures: results from the EnSite Precision™ observational study

Electroanatomic mapping (EAM) has become an essential tool for effective ablation of cardiac arrhythmias, with continued advancements in three-dimensional (3D) catheter tracking and high-resolution visualization to account for increased case complexity and duration. Available since 2016, the EnSite Precision™ cardiac mapping system (Abbott, St. Paul, MN) uses hybrid impedance and magnetic field technology that displays the 3D position of conventional and sensor-enabled electrophysiology catheters and generates high-density automated 3D electroanatomic maps that aim to improve success in complex ablation procedures. The EnSite Precision™ Observational Study was designed to quantify and characterize the use of the EnSite Precision™ cardiac mapping system for mapping and ablation of cardiac arrhythmias in a real-world environment and to evaluate procedural and subsequent clinical outcomes. In this report, we describe the performance of the system, including mapping stability, mapping times, points collected, fluoroscopy times, and acute procedural success.
Study design

The EnSite Precision™ Observational Study (NCT-03260244) was designed to quantify and characterize the use of the EnSite Precision™ cardiac mapping system for mapping and ablation of cardiac arrhythmias in a real-world environment and to evaluate procedural and subsequent clinical outcomes. There was no randomization or blinding in this study. The study was designed and sponsored by Abbott Laboratories and approved by the appropriate Institutional Review Board or Ethics Committee at each site. Data monitoring, collection, and primary data analysis were performed by the sponsor in partnership with the publication committee. This clinical study was conducted in accordance with Abbott Standard Operating Procedures, ethical principles based on the Declaration of Helsinki, Good Clinical Practice, ISO 14155, and FDA 21 CFR 50, 54, 56, and 812.

Study population

Eligible subjects were adults undergoing a cardiac electrophysiology mapping and radiofrequency (RF) ablation procedure using the EnSite Precision™ System. Subject enrollment began on September 12, 2017, and the last subject was enrolled on December 6, 2018. The study enrolled 1065 subjects at 38 clinical sites in the USA and Canada. A subject was considered enrolled in the clinical study from the moment the subject provided written informed consent. Subjects were followed for 12 months post-procedure for arrhythmia recurrence, medication use, and quality-of-life changes. The last follow-up visit was completed on January 17, 2020. Patients with atrioventricular nodal reentrant tachycardia (AVNRT) or atrioventricular reentrant tachycardia (AVRT) as the only presenting rhythm and patients with a planned cryoablation procedure were excluded. A full list of inclusion and exclusion criteria is included in Supplemental Table .

Study procedures

Subjects underwent a cardiac mapping and RF ablation procedure using the EnSite Precision™ cardiac mapping system per the standard practice of the operating physician. Subjects were prepared according to standard ablation procedures and the standard practice of the center. All devices had proper regulatory clearance and were used according to their instructions for use (IFU), including anticoagulation and activated clotting time therapeutic requirements for multi-electrode catheters. Procedure data collection included overall procedure (first catheter in to last catheter out), fluoroscopy, and mapping times. EnSite NavX Surface Electrode (NavX patch) placement and associated skin preparation were recorded. Documented mapping characteristics included times to create and edit the initial map; number of mapping points collected, used, and edited; and EnSite™ AutoMap and AutoMark module software settings used. Editing included reannotating or deleting previously collected points. Time spent “shaving” was not specifically captured. The number of gaps in lesions identified that required further ablation (touch-ups) was also recorded. Operators were asked to note whether the mapping system was stable throughout the procedure (in the operator’s judgment) and any factors affecting system stability. Mapping efficiency for a given map was characterized as the number of used points divided by the mapping time in minutes, resulting in used points per minute.

Study outcomes

Acute success was defined by the operator based upon their standard pre-defined endpoints for each type of procedure. Adverse events were not collected during this clinical study. Long-term follow-up out to 12 months will be reported in a separate analysis. Any complaints were managed via the sponsor’s standard post-market surveillance process.

Statistical analysis

Continuous variables are summarized with the number of observations, mean, standard deviation, minimum and maximum values, or median and interquartile range (IQR). Categorical variables are summarized with patient counts and percentages. All data available among the analysis population were used. Missing data were not imputed. No formal sample size calculation was performed. All analyses were performed using SAS software version 9.4 (SAS Institute Inc, Cary, NC, USA). The p-values presented are 2-sided, and p < 0.05 (not adjusted for multiplicity) was considered statistically significant.
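As an illustration of the mapping-efficiency definition and the summary conventions above, here is a minimal sketch in Python (pandas assumed; the column names and values are hypothetical and not drawn from the study database):

```python
import pandas as pd

# Hypothetical per-map records: used points and mapping time in minutes.
maps = pd.DataFrame({
    "map_id": [1, 2, 3, 4],
    "used_points": [811, 415, 1203, 96],
    "mapping_time_min": [8.6, 6.0, 12.0, 4.7],
})

# Mapping efficiency as defined in the protocol: used points per minute.
maps["efficiency_pts_per_min"] = maps["used_points"] / maps["mapping_time_min"]

# Continuous variables summarized with median and interquartile range (IQR).
q1, med, q3 = maps["efficiency_pts_per_min"].quantile([0.25, 0.5, 0.75])
print(f"Efficiency median {med:.1f} (IQR {q1:.1f}-{q3:.1f}) points/min")
```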
Enrollment and analysis population

Of the 1065 enrolled subjects, 1053 met all inclusion/exclusion criteria. Of these, 69 were excluded due to a primary indication of persistent AF in the USA that was treated off-label. A total of 45 subjects withdrew prior to the procedure, and an additional 14 subjects did not have an eligible procedure (no RF ablation performed). The final cohort of 925 subjects stratified by primary indication for ablation included the following: AF (primary indication [PI]-AF, 46.5%, 430/925), AFL (PI-AFL, 48.1%, 445/925), or Other (PI-O, 5.4%, 50/925), as demonstrated in Fig. . The PI-O cohort included 18 supraventricular tachycardia (SVT), 11 atrial tachycardia, 9 premature ventricular contraction (PVC), 6 ventricular tachycardia, and 6 Wolff-Parkinson-White syndrome patients. Additional lesions were common in the PI-AF cohort, including cavotricuspid isthmus (CTI) ablation (34.7%), lines excluding CTI (8.8%), complex fractionated atrial electrograms (4.0%), and rotor ablation (2.3%), as shown in Supplemental Table .

Baseline characteristics of the cohort are summarized in Table . The mean age was 64.3 ± 11.6 years, 646 (69.8%) were male, and mean body mass index was 30.9 ± 7.4 kg/m2. The majority (84.2%, 779/925) did not have an implantable cardiac device at the time of the procedure. Mean left ventricular ejection fraction was 54.2 ± 11.9%. The most prevalent comorbidities included history of hypertension (62.8%, 581/925), valvular heart disease (27.9%, 258/925), coronary artery disease (25.1%, 232/925), and history of diabetes (22.4%, 207/925). Of the 430 PI-AF subjects, 102 (23.7%) had a prior ablation for AF. Only 40 (9.0%) of 445 PI-AFL subjects had a prior ablation for AFL.

EnSite NavX patch placement

The standard patch kit (EnSite Precision NavX™ SE Patch Kit, Model EN0020-P) was used for almost all subjects in the analysis population (99.7%, 922/925), with older models used for the remaining three subjects. Patch size was reported to be appropriate for all but one subject. Standard patch placements were used in > 99% of subjects for all but the neck (91.5%, 845/924) and left leg (83.7%, 773/924) patches. Placement of the system reference surface electrode varied, with most placed on the lower back (74.5%, 689/925), followed by the upper back (15.0%, 139/925), abdomen (7.1%, 66/925), or other locations (3.4%, 31/925). Positional reference sensors needed to be manually removed or repositioned in 2.8% (26/924) of subjects.

Primary mapping catheter

For PI-AF subjects, Advisor™ FL Sensor Enabled™ (33.2%, 128/385), Advisor™ HD Grid (23.4%, 90/385), and Reflexion Spiral™ (18.2%, 70/385) were used most often as the primary mapping catheter. In contrast, an ablation catheter (53.9%, 174/323) or other linear catheter (34.4%, 111/323) was used most often for mapping in PI-AFL subjects. In PI-O subjects, Advisor™ HD Grid (23.9%, 11/46) and TactiCath™ (Quartz or SE, 21.7%, 10/46) were used most frequently as the primary mapping catheter.

Mapping characteristics

Mapping was required in 81.5% (754/925) of subjects, with OneMap (simultaneous model and electroanatomic map creation) used in 96.3% (726/754) of subjects. AutoMap was used to create the initial map in 81.8% (315/385) of PI-AF subjects, 55.7% (180/323) of PI-AFL subjects, and 28.3% (13/46) of PI-O subjects; a combination of AutoMap and manual mapping was used in 10.6% (41/385), 24.1% (78/323), and 32.6% (15/46) of subjects in each cohort, respectively. Sinus rhythm (56.1%, 216/385), AFL (58.2%, 188/323), and sinus rhythm (26.1%, 12/46) were the most frequent cardiac rhythms during initial map creation in each cohort, respectively (Fig. ). Among all subjects in the analysis population, local activation time (52.0%, 392/754) and peak-to-peak voltage (46.3%, 349/754) were the predominant initial map type configurations. The low-voltage ID feature was often used in the initial maps (74.5%, 562/754). Median times to create and edit the initial map were 8.6 (IQR 4.7–15.0) and 1.0 (IQR 1.0–2.0) minutes, respectively. Only 335/925 (36.2%) required editing, and 66.0% (221/335) of those required editing of fewer than 10 points. Median numbers of mapping points collected and used were 1752.5 and 811.0, respectively. Table summarizes mapping time and point collection characteristics for the initial maps created, stratified by primary indication cohort.

In addition to the 754 initial maps, 579 additional maps were created, for a total of 1333 maps. Median times to create and edit any map were 6.0 (IQR 3.0–12.0) and 1.0 (IQR 0.5–2.0) minutes, respectively. Editing of the map was not required or not applicable for most maps created (54.4%, 722/1327), and 30.2% (401/1327) required editing of fewer than 10 points. Median numbers of mapping points collected and used were 933.0 and 415.0, respectively. Average mapping efficiency for maps created with AutoMap or TurboMap was 164.9 ± 365.7 used points per minute (n = 930 maps), significantly greater than the 21.8 ± 30.3 used points per minute (n = 374 maps) for manual mapping alone (p < 0.001). Table summarizes mapping time and point collection characteristics for all maps created (both initial and additional), stratified by primary indication cohort. Furthermore, Supplemental Table describes the differences in points taken, mapping time, fluoroscopy time, and procedure time by each mapping catheter used, stratified by PI-AF and PI-AFL.

Redo AF ablation accounted for 102/430 (23.7%) of the PI-AF cohort, and redo AFL ablation for 40/445 (9.0%) of the PI-AFL cohort, as shown in Supplemental Table . As compared with de novo procedures, the time creating the initial map was significantly longer and RF time was shorter in redo AF procedures. As compared with de novo AFL procedures, those undergoing redo ablations had a longer procedure time, although no difference in time creating the initial map, RF time, or fluoroscopy time.

System stability

The EnSite Precision™ System was stable throughout 79.8% (738/925) of procedures. Baseline patient and procedural characteristics are compared by stability status in Supplemental Table ; notably, fluoroscopy and procedure times were significantly longer when the mapping system was not stable throughout the procedure. As displayed in Table , the most common factors affecting system stability were respiratory change (43.9%, 82/187), subject movement (38.0%, 71/187), and coronary sinus (CS) positional reference dislodgement (17.1%, 32/187). There were 49 “Other, specify” responses, the most frequent being blood pressure or hemodynamic change (17/49) and unknown cause (8/49). General anesthesia was used in most cases (63.0%, 583/925), with jet ventilation utilized in 10.1% (59/583). As compared with general anesthesia, those undergoing conscious sedation were more likely to have respiratory changes (4.3% vs 15.8%, p < 0.001) and patient movement (12.7% vs 4.5%, p < 0.001), with no difference in CS positional reference dislodgement (3.3% vs 5.8%, p = 0.15). Between jet ventilation and non-jet ventilation, there were no differences in respiratory changes (3.4% vs 4.4%, p = 1.0), patient movement (3.4% vs 4.6%, p = 1.0), or CS positional reference dislodgement (0% vs 3.6%, p = 0.24).

AutoMark settings

AutoMark usage data were submitted for 755 subjects (81.6%). The available choices for the lesion color and size metrics differ between Canada and the USA; therefore, these settings are summarized by country in Fig. . In Canada, Force Time Integral (FTI) and Lesion Index (LSI) were available metric choices, while they were not available in the USA at that time. In Canada, the most frequently used metrics for lesion color were LSI (54.0%, 87/161), Time (25.5%, 41/161), and Average Force (14.9%, 24/161); the most frequently used metrics for lesion size were FTI (59.5%, 47/79), LSI (19.0%, 15/79), and Time (17.7%, 14/79). In the USA, the most frequently used metrics for lesion color were Impedance Drop (42.4%, 248/585), Time (24.4%, 143/585), Average Force (14.4%, 84/585), and Impedance Drop Percent (14.0%, 82/585); the most frequently used metrics for lesion size were Time (36.4%, 139/382), Impedance Drop Percent (26.2%, 100/382), and Impedance Drop (18.6%, 71/382).

Procedural characteristics

Table summarizes acute procedural success rates, endpoints achieved, and additional procedural characteristics. Acute success was reached based on the pre-defined endpoints for the procedure in 97.4% (901/925) of cases. Median overall procedure time (first catheter in to last catheter out) was 101.0 (IQR 59.0–152.0) minutes for all subjects, with median times within each cohort of 140.5, 59.0, and 127.0 min for PI-AFL, PI-AF, and PI-O, respectively. Among subjects with AutoMark data, an average RF power greater than 40 W was used in 10.3% (71/690) of subjects and ≥ 50 W in 2.4% (17/690), suggesting that a high-power, short-duration ablation technique may have been used in these subjects. Fluoroscopy was used for most but not all subjects (87.7%, 811/925), with the lowest proportion of fluoroscopy use in the PI-AFL cohort (83.8%, 373/445). Among procedures where fluoroscopy was used, median fluoroscopy time was 11.0 (IQR 6.0–18.0) minutes. Gaps in lesions requiring touch-up ablation were identified in 42.9% (395/921) of subjects. The median number of gaps identified was 2.0 (IQR 1.0–4.0), and AutoMark assisted in identifying the gaps in a majority of subjects with identified gaps (70.6%, 218/309).
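The AutoMap/TurboMap versus manual mapping-efficiency contrast reported above can be checked from the published summary statistics alone. Below is a sketch using SciPy's Welch t-test from summary data; this is an illustration, not the study's original SAS code, and the exact test the investigators used is not stated in the text:

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics as reported in the text (mean, SD, number of maps).
t, p = ttest_ind_from_stats(
    mean1=164.9, std1=365.7, nobs1=930,  # AutoMap / TurboMap maps
    mean2=21.8, std2=30.3, nobs2=374,    # manual mapping alone
    equal_var=False,                     # Welch's test for unequal variances
)
print(f"t = {t:.1f}, p = {p:.1e}")  # consistent with the reported p < 0.001
```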
In this real-world, multi-center study including 925 patients undergoing mapping and ablation of a variety of arrhythmias using the EnSite Precision™ cardiac mapping system, we made several key observations: (1) the system was stable in nearly 80% of procedures; (2) the system allows high point-density collection and short mapping times with the aid of AutoMap and TurboMap; (3) maps required editing in slightly over a third of patients, with two-thirds of those requiring editing of fewer than 10 points; and (4) acute procedural success was high for all procedures.

The automated 3D mapping system, EnSite Precision™, uses hybrid magnetic and impedance-based catheter technology to accurately locate ablation catheters and create electroanatomic maps. A high-frequency (8 kHz) signal is sent through the three pairs of surface electrodes and interacts with the sensor-enabled catheters to create a voltage gradient along three axes of space. A catheter is used as reference (typically in the CS), and after analysis of the voltages and impedance gradient, the locations of catheters within the cardiac chamber are determined. To increase accuracy to less than 1 mm, a weak magnetic field generated by a field frame attached under the table is employed to enhance the impedance-based tracking. Taken together, the hybrid system allows for accurate and stable mapping, as demonstrated by stability in nearly 80% of procedures in the current study. Subject movement and respiratory changes were the most common causes of system instability, followed by CS catheter dislodgement. To ensure stability, placement of a stable reference via the CS catheter should be confirmed. Sedation type was shown to influence stability by preventing patient movement and respiratory changes. Moderate sedation was more common in the unstable cohort (23.5% vs 19.6%) and accounted for more respiratory changes and patient movement than general anesthesia, suggesting that general anesthesia may be favored to maintain stability. Although jet ventilation has demonstrated improved stability in prior studies, our comparison was limited by the small number of patients in the jet ventilation arm. Efforts to understand strategies beyond sedation type to improve stability are warranted.

The EnSite Precision™ system is an open-platform system that permits catheters from different manufacturers to be used to generate a map. For AF, the most used mapping catheter was the Advisor FL Sensor Enabled (33.2%), followed by the Advisor HD Grid (23.4%), while a more variable selection was observed for AFL, with “Other” encompassing 25.4% of catheter use, likely mostly catheters from other manufacturers. Further advancements in mapping technology allow collection of multiple points simultaneously to rapidly build EAMs. The AutoMap feature allows rapid signal discrimination without the need for operator discretion, permitting nearly continuous movement of the catheter during EAM creation. Use of the AutoMap feature has been shown to result in significantly faster mapping times with higher point density than manual, point-by-point mapping. In this study reflecting real-world practice, over 80% of the maps for the atrial fibrillation indication and approximately 56% of those for atrial flutter were created using the AutoMap feature, requiring a median of 10 min and 5 min of mapping time, respectively, with high point density.
In addition, only roughly a third of patients required map editing, with the majority requiring editing of fewer than 10 points in a median time of 1 min, allowing a time-efficient process for creating accurate EAMs. Lastly, there was high acute procedural success across indications, including approximately 98% for AF and AFL. There are currently no randomized trials comparing mapping systems with respect to success rates for specific arrhythmias, although a few prior observational studies have described acute procedural success with other mapping systems in various arrhythmias. In 1070 consecutive patients referred for RF catheter ablation for all arrhythmias, Romero et al. observed no difference in acute procedural success between CARTO (Biosense, Diamond Bar, CA, USA) (88.2%) and EnSite NavX (91.1%). In a separate single-center study of 70 patients undergoing focal atrial tachycardia ablation comparing acute procedural outcomes between the CARTO (n = 22) and Rhythmia (Boston Scientific) (n = 48) mapping systems, Kellnar et al. observed significantly higher success rates in the Rhythmia cohort (89.6% vs 68.2%, p = 0.03). Lastly, in another study comparing the CARTO and Rhythmia mapping systems in 74 patients undergoing AF ablation, there was no difference in acute procedural success, as PVI was achieved in all patients, although Rhythmia resulted in shorter mapping times. We expand on prior studies by providing the first systematic characterization of the EnSite Precision™ mapping system in a large multicenter study of patients undergoing ablation for various arrhythmias, reflecting real-world practice.

Although no comparator was used to adequately determine the effectiveness of the mapping system in achieving acute procedural success, a few additional features are worth highlighting that may contribute to positive outcomes. For instance, unique features of the mapping system include customizable lesion color and size metrics. In Canada, the most used metrics included LSI for lesion color and FTI for lesion size, both of which rely on contact force, while in the USA, impedance drop was commonly used for lesion color and time for lesion size. As no single parameter currently best identifies durable lesion formation, emerging data support the use of contact force-sensing catheters, LSI, FTI, and impedance drop percent for determining lesion efficacy and predicting gaps. In our study, roughly 45% of patients in both the AF and AFL cohorts required touch-up ablation of identified gaps, with a median of 3 and 2 gaps, respectively. Of note, the presence of gaps requiring touch-up does not imply failure of first-pass isolation, which was not captured in the current study. Gaps could include those identified after first-pass failure to isolate the PVs, gaps identified in PVs with demonstrated isolation after first pass that later reconnected during the procedure, or visual gaps that were further ablated by the operator irrespective of successful isolation. The automated lesion documentation tool, AutoMark, as opposed to manual marking, was used in nearly 71% of cases to identify visual gaps. While automated features increase procedural efficiency and shorten procedural times, further studies are needed to determine whether automated marking better localizes lesions compared with manual marking and ultimately reduces adverse complications and improves long-term success.

Limitations

The present study must be interpreted in the context of several limitations inherent to its design.
First, as this was an observational study including only one mapping system, comparisons with other systems cannot be made; rather, these results validate the efficacy of the EnSite Precision™ mapping system. Second, the cohort consisted of ablations for various types of arrhythmias, producing significant heterogeneity in some findings. We believe this accurately reflects clinical practice and allows for generalizability of our observations. Still, although efforts were made to stratify according to arrhythmia type, other unmeasured factors likely influence the outcomes studied, such as use of intracardiac ultrasound, provider experience, and the need for additional ablation lesions. Third, adverse outcomes during the procedure were not recorded. Fourth, utilization of general anesthesia and jet ventilation continues to increase, and the present study may underestimate system stability in practice today. Finally, these data represent outcomes from a single mapping system that has since undergone engineering and user-experience improvements. The newer-generation EnSite™ X EP System aims to broaden the range of mapping capabilities with further improvements in procedural efficiency and success.
This real-world study demonstrates that use of the open-platform EnSite Precision™ mapping system results in high procedural stability, short mapping times, high point density with the use of AutoMap/TurboMap requiring infrequent editing, low fluoroscopy time, and a high rate of acute procedural success.

*Three (3) subjects were enrolled in the primary indication atrial fibrillation cohort without the study site confirming a history of atrial fibrillation. Of these, 1 subject received ablation for atrial fibrillation and atrial flutter, 1 subject received ablation for atrial fibrillation, and 1 subject received ablation for atrial flutter.
The impact of health risk communication on self-perceived health and worry of targeted groups: Lessons from the Swedish COVID-19 response

In March 2020, the COVID-19 pandemic spread globally, prompting policymakers worldwide to quickly implement various measures to mitigate its adverse effects. As it became apparent that older individuals were particularly vulnerable to severe COVID-19 infection, many countries introduced lockdowns and social distancing measures for this group. Sweden adopted a unique strategy for managing the COVID-19 pandemic, relying primarily on recommendations rather than mandates. However, like many other nations, Sweden’s strongest recommendations were directed at individuals aged 70 and older. Starting in March 2020, the Public Health Agency of Sweden specifically advised this age group to minimize close contact with others as much as possible. During the following critical months, individuals aged 70 and above were advised to exercise particular caution, avoid social gatherings, and stay away from crowded places. The public was also urged to take measures to protect individuals in this age group, along with others at greater risk of severe illness. Nursing homes implemented visitor bans and introduced new hygiene protocols to safeguard older adults. In the media, statistics on ICU admissions and deaths consistently emphasized the 70-and-over age group. Public authorities thus categorized everyone aged 70 and above as a single vulnerable group, often overlooking other risk factors, such as underlying health conditions, which often correlate with age. The term "over 70" was widely used by health authorities and the media to describe one of the most vulnerable groups in society. The age-specific recommendations remained in place until the end of October 2020, when they were lifted—not because the risk of COVID-19 had decreased, but due to concerns that prolonged isolation had worsened mental health among the targeted groups. A report from the Public Health Agency highlighted several negative consequences, including social isolation, frustration, and the perceived stigmatization and special treatment of those classified in the high-risk group. Previous research also suggests that this narrative contributed to increased ageism and stigma, which may have exacerbated well-being challenges among older adults.

In addition to identifying age as a key risk factor for severe COVID-19 outcomes, further studies have highlighted the broader negative effects of the pandemic on older individuals’ well-being. However, limited attention has been given to the potentially added health implications stemming from older adults’ self-perception as part of a high-risk group, shaped by group-based risk assessments and public health messaging. This study seeks to fill this gap by examining how age-specific communication affects individuals’ perceived health status and their concerns related to COVID-19. We hypothesize that continuous reminders of one’s high-risk status may negatively influence perceived health and heighten concerns about the virus. Specifically, we investigate whether being categorized as part of a "high-risk group" lowered individuals’ perceptions of their general health and amplified their concerns about COVID-19, focusing on those who recently entered the high-risk category (age 70–71), compared to individuals just 1–2 years younger.
We examine perceived health in this age span both in 2019 and 2020. Our identification strategy relies on the assumption that individuals just below age 70 are similar to those just above, except for their risk-group classification. Any observed differences in perceived health between these groups in 2020, but not in 2019, are thus likely due to differing pandemic recommendations and risk-group assessments—along with the associated factors of isolation, stigma, and ageism.

While factual dissemination is crucial during a pandemic, the way information is communicated may also play a vital role. Effective health risk communication depends on various factors, including cultural and social contexts and attitudes toward public health interventions. While research often suggests that effective disease communication should rely on the transmission of facts through proper channels, it is also important to note that even factual messages can be perceived and interpreted differently by different groups of individuals. Factual dissemination through government messaging is a key tool for influencing public views and behaviors during crises, and well-considered government communication can help reduce harm. However, framing a specific group, such as older individuals, as "old and vulnerable" may contribute to negative social identities, increase ageism, and elevate fear and anxiety. Furthermore, focusing solely on the risks to older adults may fail to significantly improve attitudes or behaviors in the broader population and could even be counterproductive, as younger individuals might perceive themselves as completely safe. In contrast, when younger adults receive information about the risks posed to their age group, they are more likely to see the disease as a threat and adhere to public health recommendations.

Relatedly, several studies have explored the role of age in adherence to COVID-19 recommendations during the early stages of the pandemic. One study found that while older adults were less likely to use public transportation or attend social gatherings, they were not consistently more willing to self-isolate or wear face masks. In contrast, other research shows a positive correlation between age and overall compliance with public health measures. In France, older individuals were more likely to follow guidelines, possibly due to heightened vulnerability. In the U.S., older adults were significantly more likely to perceive the pandemic as a "significant crisis" and a "threat to health" compared to younger Americans. This heightened perception of risk may explain why subsequent research in Italy and the U.S. found that older adults were especially likely to adhere to health measures and reduce social interactions.

Nudges in the form of reminders often play a role in health communication. One study concluded that dentist check-up reminders more than doubled the percentage of patients who made an appointment. However, another study found that while nudges during the COVID-19 pandemic may have influenced intentions, they did not always translate into actions. Only individuals with poorer health status stayed home more after receiving a reminder, whereas those in good health did not significantly change their behavior. Overall, it remains unclear whether age-specific recommendations and reminders prompted individuals classified as high-risk to behave differently than those just 1–2 years younger and not yet in the high-risk category.
However, the findings from this study suggest that such age-specific communication led 70-year-olds in 2020 to perceive their general health as significantly worse than their slightly younger peers, a distinction that did not exist prior to the pandemic. Additionally, 70-year-olds expressed significantly greater concern about the virus than those aged 1–2 years younger. These findings highlight the potential unintended consequences of age-specific communication strategies during the COVID-19 pandemic. While this study does not evaluate the effectiveness of Sweden's pandemic communication strategy in protecting the elderly from infection or mortality (as it does not examine the impact of age-specific communication on these outcomes), our findings emphasize the importance of considering the broader, potentially unintended consequences of group-specific recommendations. By exploring the effects of age-targeted communication on perceived health status, this study contributes to the ongoing discussion on the development of effective risk communication strategies during public health crises.

In this study, we utilize survey data from the "National SOM" survey for the years 2019 (before the pandemic) and 2020 (during the pandemic). The SOM Institute (the Institute for Opinion Surveys and Media Analysis) is a research institute based at the University of Gothenburg in Sweden, and its yearly National SOM survey has been conducted since 1986. The survey aims to provide a comprehensive understanding of Swedish society by collecting data on a wide range of topics, including social issues and values. The survey involves a large, random, and representative sample of the Swedish population aged 16–85 years. Questionnaires are sent out in September each year, and the fieldwork is completed 3–4 months later (December/January). The survey response rate was 49 and 51 percent in 2019 and 2020, respectively. The SOM Institute adheres to rigorous methodological standards in survey design, sampling, and data analysis to ensure quality. Yearly methodology reports compare the sample with the overall Swedish population to assess the representativeness of the data. According to the reports, foreign-born individuals, younger individuals, and men (especially younger men) are somewhat less likely to respond to the survey than older individuals, Swedish-born individuals, and women.

To assess the direct effects of age-specific communication on self-perceived health and concern about the COVID-19 virus, we use a restricted sample from the survey that includes only individuals aged 68–71. In this age group, the response rate is even higher, and the skewness based on age and gender is negligible. In 2020, the sample of 68–71-year-olds consists of 726 individuals (Mean = 69.478, Sd = 1.094). In 2019, the sample of 68–71-year-olds totals 684 individuals (Mean = 69.575, Sd = 1.137). To address our research question regarding the distinct effects of targeted information aimed at a specific age group on perceived health, we analyze individuals' self-reported health both before and during the pandemic. We operate under the assumption that individuals aged 68–69 and 70–71 are, on average, similar in most aspects except for their risk-group classification. Consequently, any disparities in health and COVID-19-related concerns observed between individuals just below and just above 70 in 2020, but not in 2019, can likely be attributed to their risk-group classification.
These disparities may be influenced by varying media portrayals, communicated recommendations, worsening ageism, or perceived vulnerability. We utilize the following survey questions as dependent variables: How would you rate your general health? with 10 response options ranging from 1 = Very bad to 10 = Very good (2019: mean = 7.633, SD = 1.978; 2020: mean = 7.494, SD = 2.124), and How worried are you about the coronavirus and its consequences for: a) yourself, b) your close relatives and friends, c) the Swedish society, with four response options: 1. Not at all worried, 2. Not very worried, 3. Quite worried, and 4. Very worried. Our main variable of interest is the age of the respondent, i.e., whether the respondent is just below 70, or 70 and above, as we examine potential differences between 69- and 70-year-olds (and 68–69- and 70–71-year-olds) in their self-reported health perception before and during the pandemic. We also have access to information about individual and/or household characteristics. We add a set of control variables in our analysis, in accordance with previous research. These control variables include household income, higher education, gender, type of housing, marital status, place of residence, and month of response. We control for the urban–rural categorization of the place of residence, which includes rural area, smaller agglomeration, city or larger agglomeration, and the three largest cities/metropolitan areas of Stockholm, Gothenburg, and Malmo. We also control for the month of response, which ranges from September to December/January. The summary statistics for the dependent and independent variables are presented in S1 Table. Various methodological approaches can be used to analyze subjective health and coronavirus concerns. In this study, we use quantitative methods, a common approach in previous research. We first compute descriptive statistics and compare age groups using t-tests for the variables of interest, both for 2019 and 2020. We then run regression analyses in which we include a set of control variables to address potential confounding factors and increase the robustness of our findings. Since the survey samples include different individuals in 2019 and 2020, we conduct cross-sectional analyses. Given that the dependent variables are of a categorical and ordinal nature, we utilize ordered logit regressions to estimate the relationships of interest. This approach enables us to utilize all response categories of the dependent variables. The ordered logit model uses a maximum likelihood method to estimate the probability that an individual chooses a higher health or concern response option as a function of the independent variables. It estimates the likelihood that an individual will cross a threshold. The ordered logit model does not require the dependent variable to be continuous, normally distributed, or to have a linear relationship with the independent variables. All statistical analyses were conducted using Stata 18. The ordered logit regressions were estimated using the ologit command, with odds ratios presented to aid interpretation: values greater than 1 indicate a positive relationship, and values less than 1 indicate a negative relationship. Two tables present the results of t-tests comparing the mean values of individuals aged 69 and 70, as well as 68–69 and 70–71, for the variables “Subjective health status” and “Coronavirus concerns” regarding oneself, family and friends, and society.
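As a rough illustration, the following Python sketch mirrors the analysis pipeline described above, which the authors ran in Stata 18 (t-tests followed by an ordered logit with odds ratios). The file name, column names, and control set are hypothetical placeholders, not the study's actual data.

```python
# Minimal sketch of the analysis pipeline, assuming a hypothetical survey
# extract with one row per respondent aged 68-71.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("som_survey_2020_age68_71.csv")   # hypothetical file
df["age70plus"] = (df["age"] >= 70).astype(int)    # risk-group indicator

# 1. Descriptive comparison: two-sample t-test on subjective health (1-10)
below, above = (df.loc[df["age70plus"] == g, "health"] for g in (0, 1))
t_stat, p_val = stats.ttest_ind(below, above)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

# 2. Ordered logit (the equivalent of Stata's ologit) with controls
controls = ["income", "higher_edu", "female", "married", "urban", "month"]
exog = df[["age70plus"] + controls]
endog = df["health"].astype(pd.CategoricalDtype(ordered=True))  # ordinal outcome
res = OrderedModel(endog, exog, distr="logit").fit(method="bfgs", disp=False)

# Odds ratios: exp(coefficient); values < 1 indicate lower perceived health
print(np.exp(res.params[exog.columns]))
```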
The first of these tables focuses on the comparison between 69- and 70-year-olds, while the second broadens the comparison to include 68-69- and 70-71-year-olds. Naturally, data on coronavirus-related concerns are only available for 2020 due to the pandemic context. In 2020, the difference in subjective health between individuals aged 69 and 70 (as well as 68–69 and 70–71) is statistically significant. Specifically, those aged 70(-71) perceived their health status as significantly worse than those under 70, a distinction that was not present in 2019, before the pandemic. A similar pattern is observed for “Concern about the coronavirus and its consequences for oneself.” Individuals aged 70(-71) expressed significantly greater concern compared to their slightly younger counterparts. While concerns for family and friends also differed significantly between the groups, the significance level was lower. No significant differences were found between the groups regarding concerns for society as a whole. Thus, the descriptive statistics reveal significant differences in self-reported health status and concern about the coronavirus between individuals aged 69 and 70 (as well as 68–69 and 70–71) in 2020. Specifically, those aged 70(-71) reported significantly worse health status and higher levels of concern about the virus compared to those aged 69 (or 68–69). These findings suggest that communication targeted at this age group may have influenced their health perceptions and concerns regarding the virus. A further table presents the results from the ordered logit estimations. We conducted separate regressions for the years 2019 and 2020 to assess potential changes in subjective health before and during the pandemic, controlling for other factors. The first two columns (1 and 2) illustrate differences between individuals aged 69 and 70, while the last two columns (3 and 4) compare the groups aged 68–69 and 70–71. In 2019, no significant differences were observed in subjective health status between individuals aged 69 and 70 (column 1) or between those aged 68–69 and 70–71 (column 3). The coefficients for both comparisons were close to 1 (0.994 and 0.972, respectively) and not statistically significant. However, in 2020, the older age group, identified by the Swedish Public Health Agency as particularly vulnerable, reported significantly lower subjective health compared to those a year younger who were not specifically targeted by health authorities. The coefficients, 0.732 and 0.724, indicate a notably lower perceived health status in the older age group (columns 2 and 4). To explore whether there were also significant differences in concerns about the coronavirus between the two age groups, we conducted an ordered logit regression using concerns about the coronavirus (for oneself, family and friends, and society) as the dependent variable. Since this variable is only available for 2020, we limited our analysis to data from that year. The results, expressed as odds ratios, are presented in separate tables for the two age-group comparisons (69 vs. 70 and 68–69 vs. 70–71). The complete S2 and S3 Tables are available in the supplementary material. The results indicate that being 70 years old, compared to 69, significantly increased the likelihood of experiencing COVID-related worry and anxiety. This suggests that age-specific messaging about COVID-19 vulnerability may have contributed to heightened levels of concern among 70-year-olds compared to their 69-year-old counterparts.
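To make the magnitude of these odds ratios concrete, the short calculation below converts the reported 0.732 into an illustrative probability shift; the baseline probability of 0.60 is a hypothetical value chosen only for the example.

```python
import numpy as np

odds_ratio = 0.732                    # reported estimate for the 70(-71) group in 2020
log_odds_shift = np.log(odds_ratio)   # about -0.31 on the latent logit scale

# Illustrative reading: if a 69-year-old reports health above some threshold
# with probability 0.60, an otherwise identical 70-year-old's implied
# probability is lower:
p_baseline = 0.60                     # hypothetical baseline probability
odds_70 = (p_baseline / (1 - p_baseline)) * odds_ratio
p_70 = odds_70 / (1 + odds_70)
print(f"log-odds shift = {log_odds_shift:.3f}, implied probability = {p_70:.3f}")
# -> roughly 0.52, i.e. about an eight-point drop from the 0.60 baseline
```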
It appears that individuals aged 69 perceived themselves as less vulnerable, possibly because they had not yet reached the age group designated as high risk. As a sensitivity check, we ran all regressions using both logit and OLS models, which yielded similar results. Effective health risk communication depends not only on disseminating accurate information but also on how different groups perceive the message. It also relies on language preferences and attitudes towards public health interventions. While the transmission of facts is important, research suggests that the message’s perception by different groups can be colored by their beliefs about their level of risk. It is therefore essential to pay attention not only to how the message is disseminated, but also to how it is perceived by different groups. The Swedish COVID-19 strategy based on age-specific recommendations, informed by data on infection rates and mortality, consistently highlighted old age as the most significant risk factor for fatal outcomes. The aim of this overall communication strategy was clearly to educate and warn the public about age-related variations in severe COVID-19 outcomes in an effort to influence individual behavior and protect the most vulnerable demographic: older adults. Targeted communication and recommendations aimed at specific risk groups can mitigate physical harm and protect those most at risk. However, the process of defining such groups can be complex and may have unintended implications. While it is relatively straightforward to categorize risk based on binary criteria, such as the presence of specific pre-existing conditions (e.g., diabetes), delineating risk based on age is more challenging. The risk of severe illness increases gradually with age, particularly from around 60, 65, or 70 years. The difference in risk between individuals aged 69 and 70 is minimal, much like the difference between individuals one year apart within the same decade (e.g., 68 versus 69-year-olds or 70 versus 71-year-olds). Moreover, older individuals represent a highly heterogeneous group with substantial variability in underlying health conditions, health histories, life experiences, genetics, lifestyles, and overall aging processes. Our study emphasizes that the impact of risk-group communication depends on the careful design of messages and can influence how different groups perceive and respond to risks. While the strategy successfully targeted the most vulnerable group, it also inadvertently excluded highly similar risk groups, such as those just one or two years younger. If the definition of a risk group results in individuals near the threshold of the classification perceiving themselves and their health vastly differently, despite negligible actual differences in risk, the intended objective of employing risk-group communication can inadvertently backfire. Our findings reveal the need to consider affected and excluded groups when crafting health risk communication strategies. The way risk groups are defined and communicated can influence self-perception, health concerns, and levels of anxiety, irrespective of actual risk. Alongside the adverse effects that may result from stricter recommendations, ageism and stigma may also act as compounding factors to the adverse effects of being categorized as belonging to a high-risk group [cf. 8–13].
Consequently, the use of risk-group communication strategies should be approached with caution, with consideration given to the multifaceted implications they may have for individual perceptions and societal dynamics. This study thereby underscores the critical role of carefully designed risk communication in preventing unnecessary negative consequences, with implications for individual and societal health beyond the context of the pandemic. During the early stages of the COVID-19 pandemic, the Swedish Public Health Agency focused its recommendations on individuals aged 70 years and above, who were deemed to be at the highest risk of severe illness from the virus. Starting from March 16, recommendations were issued for this age group, urging them to minimize social interactions, with further age-specific directives issued in the subsequent critical months. These recommendations were in place until October 22, when they were abandoned due to the recognition of adverse consequences such as isolation, lack of social context, and frustration. While prevailing research has underscored age as a prominent risk factor for severe COVID-19 outcomes, less consideration has been given to the potential impact of being classified into a high-risk group on individuals’ overall health perceptions. Our study addresses this gap by examining disparities in perceived health status and virus-related concerns among individuals aged 69–70 (and 68–71) in Sweden. Drawing on data from 2019 (before the pandemic) and 2020 (during the pandemic), our results indicate a notable divergence, with 70-year-olds reporting a lower perceived health status compared to their 69-year-old counterparts in 2020, but not in 2019. Furthermore, 70-year-olds also expressed higher COVID-19-related concern than 69-year-olds in 2020. This discrepancy suggests that the Swedish COVID-19 strategy, tailored to safeguard those aged 70 and above, may have influenced perceptions of health within this demographic. Our results suggest that public health strategies, while well-intended, can have unintended consequences. Tailored health communication strategies therefore need to be carefully developed to avoid such negative consequences. Risk-group communication based on age may inadvertently exclude similar high-risk individuals, such as those close to the classification threshold. It may also have unintended negative effects on the targeted individuals’ health perceptions, making them perceive their overall health as worse than before or worse than it is. Our study also calls for future research to examine whether these disparities persist long after the immediate crisis of the pandemic or diminish over time and revert to pre-pandemic levels relatively quickly. S1 File (PDF)
Laparoscopic inguinal hernia repair with self-fixated meshes: a randomized controlled trial

Trial design This study was a prospective single-blind randomized clinical trial conducted in two surgical units in Finland, the Helsinki University Hospital and Päijät-Häme Central Hospital, from April 2021 through June 2024. There were two arms in the study. Patients were randomized at an allocation ratio of 1:1 to receive glue-coated self-adhesive mesh (Mesh Adhesix™, 10 × 15 cm, Cousin Biotech) or self-gripping mesh (ProGrip™ Laparoscopic Self-fixating Mesh, 10 × 15 cm, Flat Sheet, Covidien). This trial was approved by the ethics committee of Helsinki University Hospital (HUS/3413/2020) and registered in ClinicalTrials.gov (NCT05091853). This report adheres to the CONSORT 2010 guidelines. Participants Enrolled participants were adult (≥ 18 years) patients with symptomatic inguinal hernia confirmed by clinical examination and suitable for a day-surgery operation with a laparoscopic technique (TAPP or TEP). Unilateral and bilateral operations were included for both primary and recurrent hernias. Exclusion criteria included scrotal or incarcerated hernia, femoral hernia without an inguinal hernia finding, previous laparotomy, and no clinically palpable hernia. High-risk patients not suitable for day surgery, such as those with American Society of Anesthesiologists (ASA) physical status classification IV or higher, body mass index > 35 kg/m², liver cirrhosis, or other general illness contraindicating day surgery, were not included. As the surgeon needed to be familiar with both meshes, patients were not enrolled in the study in the case of teaching surgery, where a surgical trainee performed the operation under the supervision of a consultant. Other reasons for exclusion were inadequate language skills of the patient or the patient declining to participate. Excluded patients were recorded. Written informed consent was obtained from all participants. Procedures The operations were performed under general anesthesia. The surgical technique (TAPP or TEP) was chosen by the surgeon. In the case of TAPP, the abdominal cavity was entered with three trocars (one 12 mm and two 5 mm). A preperitoneal flap was created to achieve adequate exposure of the myopectineal orifice (MPO) and to facilitate correct mesh placement. Finally, the peritoneal flap was sutured. In TEP, a small incision was made laterocaudally near the umbilicus and the preperitoneal space was initially created by balloon dilatation. After initial dilatation with a round-shaped balloon (OMSPDB1000, Medtronic, New Haven, CT), a 10-mm Hasson trocar was inserted into the preperitoneal space for the telescope, and a further two 5-mm trocars were inserted as working ports. Dissection of the preperitoneal space was continued with 5-mm graspers to properly expose the inguinal area for correct mesh placement as described above. The mesh was tightly rolled (Adhesix™) or folded (Progrip™) before being brought in through the 12-mm trocar to the hernia site. The Adhesix™ mesh unfolds when released, after which the thin fabric covering the mesh was removed, while the Progrip™ mesh was unfolded manually. The surgeons performing the operations were general surgeons with 5 to 30 years of experience in laparoscopic hernia surgery. Helsinki University Hospital has a volume of 450 laparoscopic inguinal hernia surgeries annually.
In Päijät-Häme Central Hospital, the annual volume of laparoscopic hernia operations is 100. Follow-up was scheduled at 1 month, 3 months, and 12 months after surgery. At each timepoint, participants were sent a questionnaire specifically designed for the study. Questionnaires were collected by the investigating surgeons. All participant medical records were also reviewed. In case of noteworthy clinical problems, such as chronic pain, the participants were examined in the outpatient clinic. Primary and secondary outcomes The primary outcome was the number of analgesics used during the first week after surgery. Paracetamol and ibuprofen were routinely prescribed after surgery; opioid pain medication was used only if pain relief was otherwise insufficient. One tablet was defined as 600 mg of ibuprofen or 1 g of paracetamol. Secondary outcomes were post-operative pain intensity, time to return to work after surgery, complications, and recurrence rate. Sample size Sample size estimation was based on the results of earlier trials and our internal evaluation. According to our pilot evaluation, the mean consumption of analgesics during the first post-operative week was 16 tablets in a laparoscopic inguinal hernia operation (TEP) without mesh fixation. It was assumed that analgesic consumption after a hernia operation with self-adhesive mesh is comparable with an operation without mesh fixation. In our previous study of openly operated patients, the consumption of analgesics was 26% higher in the self-gripping mesh group. The hypothesis of the study was that use of self-gripping mesh also causes more post-operative pain after laparoscopic operations; a difference of 6 tablets during the first post-operative week was considered clinically significant. Assuming 80% power and an alpha level of 0.05, 148 participants would be required for the study. Considering an estimated dropout rate of 10%, it was estimated that 164 patients needed to be enrolled. An interim analysis was conducted halfway, with no significant differences between the groups. Randomization Patients were enrolled consecutively and randomly allocated to receive either of the meshes used in the study. TAPP or TEP was chosen according to surgeon preference, and separate randomization lists were created for both techniques and centers. Randomization was also stratified by hernia type (unilateral primary, bilateral, or recurrent hernia). A computer-based randomization list was generated with blocks of 10. Numbered and sealed opaque envelopes were opened by the operating surgeon just before the procedure. The participants were blinded to the choice of mesh. Statistical methods Extracted data were analyzed using IBM SPSS Statistics version 28. Numeric variables were tested for normality of distribution with the Kolmogorov–Smirnov test and are described as mean and standard deviation. Comparison of data between the groups was performed with the independent-samples t-test or Mann–Whitney U test, and the Wilcoxon signed-rank test for related samples. Categorical data are described as numbers and percentages. The χ² test or Fisher’s exact test was used for comparisons of categorical data. A two-tailed value of p < 0.05 was considered statistically significant.
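For concreteness, the sample-size reasoning above can be reproduced approximately with a standard two-sample power calculation; the pooled standard deviation of about 13 tablets is our back-calculated assumption (it is consistent with the reported figures but not stated in the text), and the Python/statsmodels tooling is illustrative rather than what the authors used.

```python
# Approximate reconstruction of the reported sample-size calculation
# (normal approximation; a t-based calculation gives almost the same n).
from math import ceil
from statsmodels.stats.power import zt_ind_solve_power

sd_tablets = 13.0                 # assumed pooled SD of weekly tablet count
delta = 6.0                       # clinically meaningful difference (tablets)
effect_size = delta / sd_tablets  # standardized effect, roughly 0.46

n_per_group = zt_ind_solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)                                   # ~73.7 per group
n_total = 2 * ceil(n_per_group)     # 148 participants, matching the text
n_enrolled = round(n_total / 0.90)  # 164 after allowing for ~10% dropout
print(n_per_group, n_total, n_enrolled)
```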
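The permuted-block randomization described above can likewise be sketched in a few lines; the block size of 10 follows the text, while the strata labels, arm names, and seeding are illustrative assumptions.

```python
# Sketch of permuted-block randomization (block size 10) with a separate list
# per technique, center, and hernia-type stratum, as described above.
import random

def block_randomization(n_blocks: int, block_size: int = 10, seed: int = 0):
    """Return an allocation sequence with equal arms within every block."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["Adhesix", "ProGrip"] * (block_size // 2)
        rng.shuffle(block)          # random order, but 5:5 within each block
        sequence.extend(block)
    return sequence

strata = [(tech, center, hernia)
          for tech in ("TAPP", "TEP")
          for center in ("Helsinki", "Paijat-Hame")
          for hernia in ("unilateral primary", "bilateral", "recurrent")]

# One independent list per stratum; in the trial these allocations would be
# sealed in numbered opaque envelopes.
lists = {s: block_randomization(n_blocks=3, seed=i) for i, s in enumerate(strata)}
```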
Between April 2021 and June 2023, a total of 174 patients consented to participate in the study and were operated on. Ninety (51.7%) patients were randomly assigned to receive Adhesix™ mesh (group A); 84 (48.3%) received Progrip™ mesh (group P). A total of 156 (90%) participants completed follow-up. Seven (7.8%) participants in group A and 11 (13%) in group P did not participate in the follow-up. A flow diagram of the study is shown in Fig. . Patient characteristics Participant demographics and characteristics are presented in Table . Mean age of the participants was 56 years (SD 14.1). Mean BMI was 24.6 kg/m² (SD 3.24). Twelve patients (7%) had a BMI over 30. Proportionally more of the patients with recurrent hernia were obese, with 5 (10.9%) having a BMI over 30. Most (96%) participants were generally healthy (ASA score 1–2). These characteristics were comparable between groups. Group P included more females than group A (30 [35.7%] vs. 18 [20%], p = 0.02). One quarter of the overall study population was female. Operating details Surgery in group P lasted longer. Mean operation time was 57 min (range 21–130 min) with Adhesix™ mesh and 65 min (range 25–142 min) with Progrip™ (p = 0.031). A total of 135 (77.6%) TAPP and 39 (22.4%) TEP operations were performed; 128 (73.6%) participants were operated on for primary hernias, of which 68 were unilateral and 60 bilateral. Forty-six (26.4%) participants had surgery due to recurrent hernias. There were no significant differences in the distribution of hernia type between the groups. One operation with the TEP technique was converted to TAPP, and this participant was excluded. No other peri-operative complications were noted. Operative details are presented in Table . Primary outcome The primary outcome was the use of analgesics during the first week after surgery. A total of 4.8% of the participants needed additional pain medication (tramadol or a combination of paracetamol and codeine). Participants in group A used 21.2 tablets (SD 12.7) during the first week, whereas participants in group P used 22.9 (SD 12.8) tablets (p = 0.461). During the first post-operative day, group P used more analgesics than group A (4.8 vs 4.1, p = 0.027), after which no clear difference between groups was observed. Daily use of analgesics during the first week is presented in Fig. . As the operative technique used could be an effect modifier, we performed a sensitivity analysis for the main outcome and found no statistically significant differences between techniques. Subgroup analyses for operative technique and hernia type are found in Supplementary File 2. Secondary outcomes There were no statistically significant differences between groups in mean use of regular analgesics, mean occasional use of analgesics, and time to return to work or normal daily activities (Table ). Mean duration of regular pain medication use was 10.8 days (SD 10.6) after surgery. Mean duration of as-needed analgesic use was 15.9 days (SD 16.9). Before surgery, approximately one third of patients needed analgesics to relieve inguinal pain, whereas 17% (n = 20) and 12% (n = 16) used analgesics occasionally at 1 and 3 months after surgery, respectively. Participants returned to normal activities after a mean of 16.1 days (SD 10.8) and were fit for work after a mean of 16.6 days (SD 9.6). A total of 28% (n = 37) of the study population reported that they had not yet returned to all normal activities at 3 months after surgery. Pain intensity was measured using a numeric rating scale (NRS).
Before surgery, mean NRS was 1.5 at rest, 2.5 when coughing, and 4.5 during exercise. During the first 2 days after surgery, NRS values reported by group P were higher than those of group A (day 1: at rest 4.06 vs 2.91, p = 0.015; coughing 5.73 vs 4.79, p = 0.05; during exercise 5.82 vs 4.74, p = 0.048; day 2: at rest 3.59 vs 2.52, p = 0.010; coughing 5.53 vs 4.32, p = 0.011; during exercise 5.48 vs 4.37, p = 0.021). After 2 days, both groups had comparable pain scores and rapid pain relief. Figure shows daily NRS values during exercise for the first week and at later timepoints up to 1 year after surgery. At 3 months and 1 year after surgery, the proportions of participants with pain at rest were unchanged, with around 85% of participants experiencing no pain at rest, 11% mild pain, and approximately 3% intermediate pain. One participant (0.8%), who was in group P, reported severe pain at rest at 12 months after surgery. During exercise, group P reported more moderate or severe pain (n = 10, 15.4%) compared with group A (n = 2, 3.1%) (p = 0.035). In contrast, at 1 year, 3 (5.6%) participants in group P and 8 (11.8%) participants in group A reported an NRS value > 3 (p = 0.509). To assess quality of life, the RAND-36 Item Health Survey was completed by the participants before surgery and at 3 months and 1 year after surgery. Significant improvement was noted in all physical aspects (pain, physical functioning, and limitations due to physical problems) at 3 months after surgery compared with the preoperative state. Additionally, social functioning was improved at 3 months after surgery. After 3 months, no additional significant improvement was observed in any aspect of the survey. Both groups had corresponding results (Fig. ). Complications Bruising in the inguinal area (n = 52, 43%) and the scrotal area (n = 41, 35.7%) was common. No participant needed reoperation or other intervention due to haematoma or bleeding. A total of 16% (n = 19) of participants reported seromas in the operated area, none of which needed treatment. No wound infections requiring antibiotic treatment were reported. Only one reoperation was performed during the early recovery period, due to a seroma that was mistakenly thought to be a recurrent hernia. Two hernia recurrences (one in each group) occurred during the 1-year follow-up, both later treated with open Lichtenstein mesh repair. In case of normal recovery after surgery, participants had no clinical follow-up at the outpatient clinic. Twenty-two (18.2%) participants needed a physician’s evaluation during the first month after surgery due to prolonged pain, sensory disturbances in the operated area or thigh, or swelling in the groin or scrotum. Some patients needed more effective analgesics (4.8% needed opioid analgesics, such as tramadol or a combination of paracetamol and codeine) or prolonged absence from work, but no surgical interventions were required due to these problems. During the first year after surgery, one fifth (n = 34, 19.5%) of the operated participants needed an evaluation due to post-operative concerns (acute or chronic pain, 3 cases of hydrocele, 2 recurrences), with no significant difference between groups.
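As a rough consistency check on the primary-outcome comparison, the reported summary statistics can be fed into a two-sample t-test from summary data; the per-group ns below (83 and 73, based on follow-up completion) are our approximation, so the computed p-value lands near, but not exactly at, the reported 0.461.

```python
# Two-sample t-test recomputed from the published summary statistics.
from scipy.stats import ttest_ind_from_stats

t_stat, p_val = ttest_ind_from_stats(
    mean1=21.2, std1=12.7, nobs1=83,   # group A (Adhesix): tablets in week 1
    mean2=22.9, std2=12.8, nobs2=73,   # group P (ProGrip)
    equal_var=True,
)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")   # roughly p ~ 0.4
```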
This is the first randomized controlled trial comparing the Progrip™ and the Adhesix™ self-fixating meshes in laparoscopic groin hernia surgery. In contrast to our previous findings in open inguinal hernia procedures, the current study did not reveal differences in many aspects of post-operative pain, such as analgesics used during the first week after surgery, the need for unplanned visits due to pain, and returning to normal activities and work. As expected after laparoscopic surgery, the average pain relief was rapid and quality of life increased. To our knowledge, there are no previous trials comparing self-gripping and self-adhesive glue-coated meshes. However, in two randomized studies the use of self-gripping mesh has been compared with fibrin glue fixation. Law et al. reported similar results for both short- and long-term post-operative pain for the self-gripping Progrip™ mesh compared with glue fixation in TEP. Ferrarese et al. reported similar results for TAPP in an otherwise identical setup. There is only one published clinical trial on Adhesix™ in laparoscopic inguinal hernia surgery. In a prospective study by Tollens et al., the post-operative pain measured by visual analogue scale was mild at 1 month and in the long term. The findings of our study are consistent with this, and it can be concluded that Adhesix™ is comparable with glue fixation of mesh. Furthermore, the benefits of Adhesix™ may be similar to the benefits of glue fixation published in the literature, such as less early post-operative pain. As the overall results of the glue-coated mesh and glue fixation of mesh are comparable, a potential advantage of the glue-coated mesh may be that additional time-consuming and expensive fixation is not needed. The reason for the discrepancy between our previous and current studies regarding post-operative pain is not evident. However, one explanation may be that in an open procedure, the contact of the inguinal nerves with the mesh is more extensive, and in the early recovery period micro grips cause more irritation than glue. There are two previous RCTs reporting post-operative NRS pain scores after TEP. Matikainen et al. reported NRS values of just below 2 at 1 week after surgery with TEP for unilateral primary hernias. Corresponding results were reported by Yildrim and Sahiner. The design of these studies was different from ours. In both studies, only patients with unilateral primary hernias were enrolled, and they used polypropylene meshes (12 × 15 cm), which were not fixed at all. In the current study, low NRS scores, although slightly over 2, were noted at 1 week. The fact that approximately 60% of all hernias in our study were bilateral or recurrent reflects our routine practice, which according to the literature may lead to more post-operative pain. This may explain our slightly higher NRS values. However, some benefits regarding NRS values for Adhesix™ were noted. On days 1 and 2 after surgery, group P reported higher pain scores, although during the subsequent days the difference was no longer observed. Interestingly, this was not reflected in the amount of analgesics used during the first week, indicating that this small difference in NRS values is not clinically meaningful. The reason for this difference is not evident. However, one explanation could be that after tissue contact the Adhesix™ mesh softens very rapidly, causing less irritation compared with the stiffer Progrip™ mesh and its rigid micro grips.
Although the mean NRS after 3 months was low for both groups (0.8–1.3), some patients still experienced pain and discomfort for a longer time. A total of 28% of participants in this trial reported some limitations in normal activities 3 months after surgery. When the numbers of participants experiencing moderate or severe pain during exercise at 3 months after surgery were considered, we found significantly more participants in group P than in group A (15.4% vs. 3.1%, p = 0.035). However, 5.6% in group P and 11.8% in group A reported NRS over 3 during exercise at 1 year, whereas at rest only 1.8–2.9% experienced intermediate or severe pain. As chronic pain is defined as at least moderate pain lasting more than 3 months after surgery and affecting daily activities, we suggest that the proportion above gives an indication of chronic pain rates in this study. In the RCT using Adhesix™ and Progrip™ in open inguinal hernia surgery for unilateral primary hernias, the corresponding chronic pain rates (NRS > 3) at 1 year after surgery were 5.2–7.4%, with no statistical difference between the groups. Other RCTs reported chronic pain rates of 4.5–11% one year after laparoscopic operations, which are consistent with our results. The times to return to normal activities and to be fit for work were estimated by the participants in this trial, with no differences observed between groups. These estimates are highly dependent on the participant’s own requirements and are not directly comparable between different trials. One RCT with TEP revealed approximately 14 days for returning to work and normal activities, whereas other RCTs reported 10–14 days after laparoscopic surgery and 12–19 days after open mesh repair to return to work and normal activities. In our study, reaching working ability and normal daily activities took a few days longer, approximately 16 days. This discrepancy is probably explained, at least in part, by the different patient populations since, as mentioned earlier, most of our participants had bilateral or recurrent hernias. In contrast to the current study, in open hernia surgery using self-fixated mesh, the return to normal activities was significantly earlier in operations performed with Adhesix™ compared with Progrip™ (17 and 22 days, respectively). This finding may indicate that the self-gripping Progrip™ mesh is more suitable for laparoscopic hernia operations. Operation time was shorter in group A (57 min) compared with group P (65 min). This difference in time may partly be explained by the fact that the Adhesix™ mesh unfolds when released, while the micro grips of the Progrip™ mesh grab onto each other and the mesh needs to be unfolded manually. Previous studies have shown rapid improvement in quality-of-life measures after laparoscopic surgery. In fact, patients experiencing more preoperative pain usually have more physical limitations before surgery and thus benefit the most in quality-of-life measurements. The RAND-36 Item Health Survey was used in this study. Significantly higher scores were reported in all physical domains as early as 3 months after surgery, indicating rapid recovery. Similar conclusions were made after open surgery using the same meshes. In this study, in addition to pain-related contacts, approximately a tenth of participants contacted a physician due to some kind of post-operative adverse event. This was mostly due to post-operative swelling or a lump in the area caused by hematoma, seroma, or recurrence.
Except for two recurrences that required reoperation, no other interventions were needed. Although bruising in the inguinal area or scrotum is very common after inguinal hernia surgery, the need for interventions due to bleeding or hematoma was rare. Other studies report reoperations due to these complications (approximately 1% after open surgery), while laparoscopic surgery does not seem to require reoperations. In our study, over one third of the participants suffered from bruising, but no participant needed reoperation. Seroma formation is not rare after inguinal hernia surgery. The incidence of seroma is 7.2–38% after laparoscopic surgery, which is partly dependent on the length of follow-up. In the current study, 16% of patients reported fluid collections in the operated area during the first month. Two recurrences (1.3%) were observed in this study, which corresponds with other reports. However, it should be noted that the incidence of recurrence increases over time. Obesity is a risk factor for hernia recurrence, and in this study, too, higher BMI levels were noted in those operated on for recurrent hernia. Particularly for the Adhesix™ mesh, no firm conclusions regarding recurrence rate can be drawn without longer follow-up. This study has some limitations. First, many factors influence the experience of pain, and an objective measurement of pain is difficult. Therefore, differences may appear in some measures while other measures remain similar. Second, most participants were not examined after surgery. Although some minor problems may therefore have gone undetected, participants with any clinically relevant problem were evaluated in the outpatient clinic. Additionally, we have a national electronic health care database allowing follow-up 1 year after surgery. Third, some participants did not participate in the follow-up, which may have influenced the results. The strength of this study was its randomized controlled design, with comparable follow-up between groups and a high response rate of 90%. Our hypothesis that glue-coated mesh causes less acute pain, measured by the amount of pain medication used, based on the superiority of Adhesix™ in our earlier trial on open hernia surgery, could not be confirmed in a laparoscopic setting. Long-term results after self-adhesive mesh repair have not yet been published and are therefore an important subject for future research. As self-fixating mesh makes the operation more efficient, it will be interesting to see whether alternatives to glue or resorbable micro grips can be developed and integrated into mesh fixation. Additionally, technical improvements, particularly robot-assisted hernia surgery, show promising results, and their influence on post-operative pain is an important aspect to investigate in the future.
This trial found that the self-adhesive Adhesix™ mesh was non-inferior to the self-gripping Progrip™ mesh in laparoscopic surgery. Surgery with either self-fixating mesh led to rapid recovery and improvement in quality of life.
Below are the links to the electronic supplementary material. Supplementary file1 (DOCX 15 KB) Supplementary file2 (DOCX 15 KB) Supplementary file3 (DOCX 25 KB)
|
Intervention mapping for systematic development of a community-engaged CVD prevention intervention in ethnic and racial sexual minority men with HIV

Introduction Cardiovascular disease (CVD) has been a leading cause of death in the United States for over 100 years. As the primary cause of mortality, CVD claims a life every 34 seconds. While untreated or poorly managed heart disease can result in life-altering consequences, its impact is disproportionately greater in historically marginalized communities. Black and Latinx sexual minority men with HIV are more vulnerable to CVD. Compared to White and heterosexual peers, they carry a higher HIV burden. This vulnerability is driven by persistent intersecting structural and social disadvantages (e.g., discrimination and stigma related to sexual identity, race, and ethnicity). The result is heightened CVD risk arising from HIV-related chronic inflammation and treatment side effects. These stressors may be the catalyst for engagement in CVD risk behaviors, such as smoking, substance use, and alcohol consumption, that further perpetuate HIV-related CVD disparities. Cardiovascular health (CVH) disparities (the differences in the prevalence, incidence, and outcomes of CVD among different demographic groups) in Black and Latinx sexual minority men are a pressing issue, and innovative prevention strategies are urgently needed. However, behavioral CVH interventions in this population have been limited, with insufficient uptake in marginalized communities. Extant research has lacked sufficient sampling and data because most studies have been less inclusive of minoritized populations. Intervention Mapping is one advantageous approach to address this issue, as it engages community members or patients as key partners in the program planning process. Intervention Mapping is a rigorous, evidence-based, and reproducible approach to the development of culturally salient interventions and may be of great benefit in populations that have been historically underrepresented in research. The purpose of this study was to map a CVD prevention intervention for Black and Latinx sexual minority men with HIV.
Methods Intervention Mapping is a framework for planning a theory- and evidence-based health promotion program in an iterative, stepwise process. It integrates theoretical knowledge, empirical evidence from scholarly research, and insights from the priority population. It consists of six key steps: (1) assessing community needs to establish a logic model of the problem; (2) identifying expected program outcomes and objectives to create a logic model of change; (3) selecting theory-based methods and practical strategies to design the program; (4) producing program components; (5) planning for implementation; and (6) planning for evaluation. Its structured, detailed protocol facilitates decision-making for program planners. In this study, we focused on the first three formative steps to describe the approach we used to develop a behavioral intervention for CVD prevention in Black and Latinx sexual minority men living with HIV. 2.1 Step 1: needs assessment The first step provides the context for intervention development by assessing health-related problems and their behavioral and environmental causes. In this study, the needs assessment was based on (1) a literature review, (2) the development of a framework, and (3) interviews with the local community. 2.1.1 Literature review We conducted a scoping review of the published literature on non-pharmacological behavioral or lifestyle interventions for CVD prevention among adults living with HIV. The review was conducted in collaboration with a research librarian and guided by the Joanna Briggs Institute Manual for Evidence Synthesis. While our primary focus was on the prevention of hypertension (a leading CVD risk factor), we adopted a comprehensive approach by incorporating literature that explored behavioral and lifestyle strategies relevant to hypertension prevention, alongside studies that specifically targeted hypertension as an outcome. Studies that combined pharmacological management with behavioral interventions were excluded. Details of the study are reported elsewhere. 2.1.2 Framework development Medical distrust is a longstanding obstacle to research engagement and patient-clinician trust. It is a result of systemic racism, stigma, and discrimination against individuals based on their race, ethnicity, gender identity, and sexual orientation. Distrust of the health care system impedes access to care, adherence to medical treatment, and participation in health research. After reviewing the literature on technology-driven behavioral interventions for sexual minoritized individuals, we developed a stepwise e-Health framework that could be used to extend the reach of behavioral interventions into populations that are the most health disparate, while also acknowledging reasons for lack of trust and the need for increased privacy in empowering and non-stigmatizing ways. 2.1.3 Local needs assessment This research was part of a larger exploratory sequential mixed-methods study. We partnered with two community-based organizations in New York City that are dedicated to addressing health equity and supporting ethnic, racial, and socioeconomically minoritized populations. We conducted a local needs assessment using a community-engaged approach to develop a culturally salient CVD prevention intervention for Black and Latinx sexual minority men living with HIV.
This community-participatory approach is recognized as effective in enhancing the engagement of individuals who are marginalized and stigmatized due to their intersecting identities. The needs assessment involved both quantitative and qualitative methods through survey administration, focus groups, and semi-structured interviews. Data collection using triangulated methods helps to gain a comprehensive understanding of complex phenomena, especially in minoritized and underrepresented populations. 2.1.3.1 Quantitative assessment In the quantitative phase, validated survey measures were administered to 30 Black and Latinx sexual minority men who were members of the community-based organizations. We assessed perceptions of living with chronic conditions such as HIV and comorbid hypertension and diabetes. We also assessed modifiable CVD risk behaviors, such as physical activity, tobacco, and e-cigarette use. Details about the descriptive study can be found elsewhere. 2.1.3.2 Qualitative assessment In the qualitative phase, we conducted focus groups with 10 HIV community experts to explore community-informed perceptions of barriers and facilitators to CVD prevention. Additionally, 30 community members who completed the survey and provided demographic information participated in qualitative semi-structured interviews immediately afterward. This article focuses specifically on those interviews with community members. 2.1.3.2.1 Study design We conducted semi-structured interviews with 30 community members using Zoom (Zoom Video Communications, Inc., version 1.5, San Jose, USA). The purpose of this study was to gain a deeper understanding of health concerns, HIV-related comorbid chronic conditions, and barriers and facilitators to CVD prevention. The development of this protocol was reported using the Standards for Reporting Qualitative Research. 2.1.3.2.2 Ethics approval The study was approved by the New York University Institutional Review Board in February 2021 (IRB-FY2021-4772) and the Yale University Institutional Review Board on May 27, 2022 (#2000031577). All procedures were in accordance with the ethical standards of the institutional and national research committees, the 1964 Helsinki Declaration and its later amendments, or comparable ethical standards. Informed consent was obtained verbally from all eligible participants who agreed to participate, given the sample characteristics of HIV diagnosis and non-heterosexual identity, as well as the minimal-risk nature of the study. Each participant was compensated with a USD 45 Visa gift card in appreciation of their time. 2.1.3.2.3 Participant recruitment Recruitment strategies included word of mouth from program managers, digital flyers, and snowball sampling. Eligibility criteria were: (1) self-identifying as a non-heterosexual male, (2) age 30 to 65, (3) identifying as from an ethnic or racial minoritized background, (4) positive HIV serostatus, (5) access to the internet, and (6) receiving services from a partnering community-based organization. Interested individuals who met these criteria were screened, consented, and enrolled. 2.1.3.2.4 Data collection A semi-structured interview guide included five open-ended content questions, such as “How can we improve the ways that we engage communities of color in health promotion using technology to prevent heart disease?” and “Tell me about any medical conditions other than HIV that you might be concerned about,” supplemented with probes.
The interviews were conducted from May 2021 to October 2022, with each session lasting approximately 45–90 min. Each participant confirmed that they were in a location where they felt comfortable being interviewed. We audio-recorded every interview and assigned a pseudonym to each participant throughout the process to protect their privacy, recognizing the importance of such measures when engaging marginalized groups ( , ). Community members were interviewed in their preferred languages, ensuring respect for their cultural values and enhancing the accuracy of the data collected ( ). The principal investigator (SRR), who has extensive experience in qualitative research, conducted the interviews in English. For participants preferring to be interviewed in another language, a professional translator, approved by the community-based organizations, simultaneously interpreted the interviews in Spanish or Haitian Creole. Data saturation was achieved when no new themes emerged during the final interviews, ensuring a comprehensive exploration of participants’ experiences.

2.1.3.2.5 Data analysis

Data analysis followed a five-step procedure using NVivo version 14 software ( , ). Three authors, BK, LC, and SRR, analyzed the interview data using thematic analysis. Initially, collected data were organized and prepared for analysis. The interviews were transcribed verbatim by a certified transcription company, and the files were securely saved on a password-protected University cloud server. Subsequently, all the transcribed interviews were read through multiple times, with researchers taking notes to immerse themselves in the data and gain a general sense of the information. Two coders, BK and LC, then coded the data by bracketing significant words or phrases into meaning units and identifying representative categories. Half of the interview transcripts were coded independently by both coders to enhance inter-rater reliability and consistency in coding ( ). During regular meetings, the authors discussed and resolved coding discrepancies, iteratively refining categories until a consensus was reached and a codebook was created. From this coding process, the authors generated themes, identifying major ideas that emerged repeatedly across the transcripts. Lastly, these themes were interpreted and represented.

2.1.3.2.6 Methodological rigor

Methodological rigor was ensured by adhering to four key criteria ( ). First, credibility was established with a detailed interview guide and the investigator’s expertise in qualitative research methods. Credibility was further reinforced by conducting peer debriefings and member checking to ensure the data were interpreted accurately. Second, dependability was demonstrated through comprehensive descriptions of the research methods in a published study protocol ( ). It was also supported by maintaining an audit trail that included the semi-structured interview guide, audio recordings, and professionally transcribed interviews. Data storage, organization, and analysis using qualitative research software facilitated stepwise replication of the study findings. Regarding data analysis, final interpretations were achieved through extensive deliberations among the three authors (BK, LC, and SRR). To ensure intercoder reliability, Cohen’s Kappa coefficient was also calculated using the NVivo version 14 software to measure the degree of agreement between coders, thereby minimizing individual biases ( ); a brief illustration of this statistic is sketched below.
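To clarify what the kappa statistic captures, the following sketch contrasts observed coder agreement with the agreement expected by chance. This is a minimal, hypothetical Python illustration: the two coders’ label lists are invented for demonstration and do not come from our codebook, transcripts, or NVivo output.

```python
# Minimal, hypothetical illustration of Cohen's kappa for two coders.
# The labels below are invented examples, not actual study codes.
from collections import Counter

coder_1 = ["barrier", "facilitator", "barrier", "barrier", "facilitator",
           "barrier", "facilitator", "facilitator", "barrier", "barrier"]
coder_2 = ["barrier", "facilitator", "barrier", "facilitator", "facilitator",
           "barrier", "facilitator", "facilitator", "barrier", "barrier"]
n = len(coder_1)

# Observed agreement: proportion of segments both coders labeled identically.
p_o = sum(a == b for a, b in zip(coder_1, coder_2)) / n

# Chance agreement: summed products of each coder's marginal proportions.
counts_1, counts_2 = Counter(coder_1), Counter(coder_2)
p_e = sum((counts_1[c] / n) * (counts_2[c] / n)
          for c in set(coder_1) | set(coder_2))

# Kappa scales the above-chance agreement to the maximum achievable.
kappa = (p_o - p_e) / (1 - p_e)
print(f"p_o = {p_o:.2f}, p_e = {p_e:.2f}, kappa = {kappa:.2f}")
# For these invented labels: p_o = 0.90, p_e = 0.50, kappa = 0.80
```

By Viera and Garrett’s benchmarks, coefficients above 0.80 indicate almost perfect agreement.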
Third, transferability was supported by recruiting participants from two distinct community-based organizations. Achieving data saturation and observing consistent results across a diverse group of participants enhance the potential for the study findings to be transferable to other marginalized populations with chronic conditions. Lastly, to ensure confirmability, the investigators maintained an attitude of openness to understanding participants’ lived experiences related to the intersectionality of their racial, ethnic, and sexual minoritized identities, as well as their perceptions of chronic conditions and CVD prevention, while considering their own positionality and reflexivity.

2.2 Step 2. Identifying expected program outcomes and objectives

Following the needs assessment, we determined the expected behavioral and environmental outcomes and their determinants based on empirical literature, applicable theories, and the qualitative findings from the semi-structured interviews conducted in this study ( , ). The expected program outcomes were differentiated into performance objectives (POs), which detailed the specific behaviors or sub-behaviors that participants need to perform to achieve the desired outcomes ( ). For each PO, the changeable determinants of behavior were selected. We also formulated change objectives (COs), which addressed changing particular aspects of the behavioral determinants so that participants are able to meet the POs ( ). We presented the desired outcomes and determinants by creating a logic model of change. The COs for the identified determinants were specified against their associated POs in a matrix of objectives (a hypothetical fragment of such a matrix is sketched at the end of this section).

2.3 Step 3. Selecting theory-based methods and practical strategies

In step 3, program developers select change methods that are grounded in theory and then choose practical strategies ( , ). Theoretical methods are techniques used to achieve behavior change in line with program objectives by influencing determinants, whereas practical strategies are specific applications that deliver these methods ( , ). The relevant theories guiding the program design were selected using three approaches: (1) reviewing previous literature on the relevant topical areas (issue approach), (2) brainstorming theoretical constructs related to the behavior of interest (content approach), and (3) identifying frequently used theories (general theory approach) ( ). The selected theoretical methods were then implemented as practical strategies that best met the needs of the target populations and the contexts of intervention delivery ( , ).
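Because the Step 2 matrix of objectives is easiest to grasp from a concrete case, the sketch below shows one possible way to represent such a matrix, with POs as rows, determinants as columns, and COs in the cells. Every PO, determinant, and CO shown is a hypothetical example chosen for illustration; none is drawn from the actual matrix developed for this intervention.

```python
# Hypothetical fragment of a Step 2 matrix of objectives.
# Rows: performance objectives (POs); columns: behavioral determinants;
# cells: change objectives (COs). All entries are invented examples.
matrix_of_objectives = {
    "PO1: Engage in regular moderate physical activity": {
        "knowledge": "CO1.1: Describe the recommended weekly amount of "
                     "physical activity for CVD prevention.",
        "self-efficacy": "CO1.2: Express confidence in fitting short walks "
                         "into daily routines such as commuting.",
        "social support": "CO1.3: Identify a peer to be physically active with.",
    },
    "PO2: Replace sugar-sweetened beverages with water": {
        "knowledge": "CO2.1: List common drinks that are high in added sugar.",
        "self-efficacy": "CO2.2: Demonstrate confidence in choosing water "
                         "when eating out.",
    },
}

# Reading the matrix: for each PO, enumerate its COs by determinant.
for po, cells in matrix_of_objectives.items():
    print(po)
    for determinant, co in cells.items():
        print(f"  [{determinant}] {co}")
```

Read row by row, the full matrix specifies, for every PO, the set of COs that must be met before participants can perform that behavior.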
Results

Systematic approaches, guided by the Intervention Mapping steps, facilitated theory- and evidence-based decision-making throughout the intervention development process. We focused on the first three of the six steps: (1) establishing a logic model of the problem through a needs assessment; (2) developing a logic model of change by identifying expected program outcomes and objectives; and (3) selecting theory-based methods and practical strategies for program design. The findings from each of these Intervention Mapping steps are described below.

3.1 Step 1. Logic model of the problem

3.1.1 Literature review

We conducted a scoping review on behavioral interventions for CVD prevention among adults living with HIV ( ). It highlighted a growing emphasis on non-pharmacological, multicomponent approaches addressing lifestyle CVD risk factors, such as physical activity, diet, and weight management. Most US studies were concentrated in the Southeast, suggesting that future research should extend to underrepresented geographic regions and include a broader range of populations at elevated CVD risk. Details of the full review can be found elsewhere ( ).

3.1.2 Framework development

We have presented an innovative eHealth technology framework to shift the existing paradigm of medical distrust among sexual minority men of color using a stepwise, multi-construct approach ( ). Our framework was developed in multidisciplinary collaboration with leaders in nursing, public health, and bioethics. The framework illustrates how eHealth interventions encourage engagement through the adoption and use of technology, anonymity, co-presence, self-disclosure, and social support to foster trustworthiness and trust in healthcare. We proposed the use of two eHealth modalities: (1) a virtual environment and (2) avatar-led videos (i.e., computer-generated, three-dimensional online spaces and human-like digital representations, respectively). These technologies provide private, interactive platforms that empower individuals and improve access to reliable health information, thereby promoting health behaviors in sexual minority men from racial and ethnic minority communities who live with chronic conditions.

3.1.3 Local needs assessment

3.1.3.1 Quantitative assessment

The quantitative assessment using validated survey measures revealed that most participants perceived their conditions as manageable yet serious and reported that the associated symptoms were complex. More than half did not meet the minimum recommendations for physical activity, and a third reported current nicotine use. The findings also highlighted disparities in sleep and mental health, as well as financial hardship associated with living with HIV. The descriptive findings of this quantitative study have been detailed elsewhere ( ).

3.1.3.2 Qualitative assessment

The following are the results of the qualitative data analysis.

3.1.3.2.1 Participant characteristics

Among the 30 community members who participated in this study, the mean age was 47.5 years (SD = 12.5), and the mean duration since HIV diagnosis was 17.2 years (SD = 11.1, range 1–41). All participants (N = 30) reported having health insurance and access to care, with 97% (n = 29) having a regular provider and being on antiretroviral therapy. Participants reported being out of the closet for an average of 25.7 years (SD = 14.4).
The majority of participants preferred the gender pronouns “he/him” (97%, n = 29), while one participant (3%) preferred “she/her.” For race and ethnicity, we documented responses verbatim as participants identified themselves, adhering to the gold standard of self-identification for reporting these demographics ( ). Regarding ethnic background, 70% of participants (n = 21) self-identified as Latinx. While Latinx ethnicity refers to having heritage from Latin America and the Caribbean, regardless of race, the Haitian participants in our study self-identified not as Latinx but as Black, despite Haiti being part of Latin America. This distinction may be associated with Haiti’s unique history and culture, which are rooted in African descent, and its primary language, Haitian Creole. Further demographic information is presented in .

3.1.3.2.2 Thematic analysis

Using an inductive coding scheme, we identified nine major themes: (1) perceptions of health, (2) current and anticipated health concerns, (3) behaviors and regimens that improve health and well-being, (4) encounters with medication, (5) social encounters with in-groups and out-groups, (6) desired delivery of health education, (7) comfort in using technology and accessibility, (8) ways to nurture engagement, and (9) nurturing a safe space among users in technology-based behavioral interventions for Black and Latinx sexual minority men with HIV. Cohen’s Kappa coefficient indicated almost perfect intercoder agreement (κ = 0.95), based on Viera and Garrett’s criteria for interpreting the kappa statistic ( ). The themes and subthemes are described below, along with supporting quotes. Pseudonyms were used to safeguard the identities of participants. If a participant is referred to as “he/his/him” in the quotes below, it indicates that a translator conveyed the participant’s words into English.

Theme 1: Perceptions of health. This theme focused on the overall perception of health while living with HIV. Its subthemes were describing one’s health status, control over health, and perceptions of aging.

3.1.3.2.2.1 Describing one’s health status

The interviewees were asked to rate their current health. Responses ranged from unhealthy/negative through average/neutral to healthy/positive. Participants who perceived themselves as healthy described their health as “very well,” “fine,” “pretty good,” “strong and solid,” “perfect,” “super-blessed,” “completely cool,” or “free,” with some rating their health status numerically (e.g., 10 out of 10). Factors associated with positive health perceptions included regular “medical checkups,” receiving treatment and medication, not “getting sick,” not having “too many health conditions” or “any pain,” and disclosing their condition. They felt healthy when they could “work,” “be able,” and live a “normal” life, such as “going out to do [one’s] errands,” “traveling,” or living “just with a little extra precaution.” Some participants evaluated their health positively when their conditions improved compared to their baseline or when test results, such as CD4 cell count, showed improvement.

“I have already the treatment. I also I’m open about my condition with my friends. I do not have nothing right now that is bothering me like that. I have a good doctor. So I feel that my life is good right now, and I feel healthy.” (Ellie, age 48).
In the average/neutral category, participants described their health status as “regular,” “fair,” “average,” “up to par,” “50–50,” and “in the middle.” Underlying conditions such as HIV and other comorbidities, uncertainty about the causes of their illness and symptoms, and the burden of taking multiple medications and dealing with their side effects prevented them from perceiving themselves as fully healthy.

“Well, in relation to my HIV, I believe it’s really good. I mean everything is under control. But I have underlying conditions, which cause distraction in my health, so that’s why I rated myself fair.” (Cheo, age 55).

Participants who perceived their health status as negative described managing their health as “stressful,” “very hard,” “very difficult,” and “not easy” due to HIV and comorbidities, along with a lack of “possibilities” or availability of treatment and medications. They mentioned coping mechanisms such as “denial,” ignorance, “crying,” and being “isolated” in reaction to their HIV diagnosis and reported feeling lonely, irritable, cranky, tired, depressed, and afraid.

“Some days, I wake up being depressed. It has not been easy.” (Yoga, age 65).

“Because you know I have this problem with high [blood] pressure … and sometimes that I can feel a little bad for that.” (Jesus, age 54).

3.1.3.2.2.2 Control over health

The subtheme of control over health explored participants’ perceptions of how they could control their own health. Participants mentioned they could “control their own body” and “illness.” They also mentioned that their “lifestyle choices” are responsible for their health status and that it is “up to” themselves to “make well-informed decisions.” They perceived the importance of “making changes” and “taking care of [themselves]” to “manage” and “improve” their health.

“The high blood pressure, I do believe that some like of my lifestyle choices I think is what led me to developing it. So, it is important that I kind of like have been able to manage it with like medicine and stuff.” (Xander, age 33).

“Your energy, your strength, and your mentality controls your illness in your body.” (Bunny, age 32).

“I always say; I believe HIV lives with me. I have control of what I eat, what I do to take care of myself.” (Manuel, age 62).

3.1.3.2.2.3 Perceptions of aging

Regarding aging, participants acknowledged physiological decline and reduced functionality. They mentioned experiencing or anticipating health problems they were not overly concerned about, noting that their bodies are “not like when [they] were younger.” They also discussed reduced physical activity, metabolism, and social life. Specific concerns associated with aging included physical illnesses and disabilities, such as “stiff joints” and “walking with a cane,” as well as mental issues like “loss of memory” or Alzheimer’s disease. Despite these concerns, participants expressed a promising outlook on longevity while living with HIV. They believed they could still engage in health-promoting activities as they age, such as exercising at an appropriate intensity instead of “vigorous” physical activity and finding a balance between alone time and socializing.

“Because once you grow up, you can get sick. And your health is not the same. Your body’s not the same. Your body changes.” (Atlantic, age 47).

(Translator response) “But you know, when you have age and your elderly, you cannot do it as much.” (Roseman, age 62).
Theme 2: Current and anticipated health concerns. This theme explored the health concerns that participants were experiencing and those they worried about facing in the future. Participants expressed significant concerns about chronic, long-term health conditions. When discussing the potential sources of these concerns, they frequently referenced their family’s heredity, family medical history, and observations within their community.

3.1.3.2.2.4 Current health concerns

While participants reported a variety of current health concerns, they most often expressed significant worries about chronic cardiovascular and metabolic conditions, including diabetes, high blood pressure, high cholesterol, and heart disease. Other chronic conditions mentioned included gastrointestinal issues (e.g., cirrhosis, stomach ulcers), neurological conditions (e.g., seizure disorder), pulmonary diseases (e.g., breathing problems, asthma), auditory concerns (e.g., chronic tinnitus), and conditions possibly related to chronic inflammation (e.g., joint pain, carpal tunnel syndrome, plantar fasciitis). Participants also expressed concern about mental health conditions, such as post-traumatic stress disorder, depression, and anxiety, which they perceived as being associated with their HIV diagnosis and medication. Beyond chronic diseases, participants reported lifestyle-related health concerns such as being overweight and having sleep problems (e.g., difficulty falling asleep, obesity-induced sleep apnea). Infectious diseases, including influenza and SARS-CoV-2 infection (COVID-19), were also mentioned. Participants described these conditions as “cumbersome,” noting that they interfered with leading a normal life, including regular activities and diet. Managing these conditions often required significant lifestyle changes to meet medical recommendations and guidelines. While some participants acknowledged that their “lifestyle choices led [them] to developing” these chronic conditions, others expressed uncertainty about “what’s causing what.”

“I feel like a little depression, because you know I need to take this medicine every day for all of my life.” (Jesus, age 54).

“Well, my main concern is diabetes, to be honest with you. It’s one of the most challenging things that I’ve ever had to go through. It puts everything else on the backburner as far as my focus, which is on diabetes type 2. It’s really difficult to manage. You have to make drastic live-changes [sic] and diet changes.” (Cheo, age 55).

Some participants reported having no current health concerns when their HIV-related symptoms were well controlled with medication, they had no chronic conditions or other illnesses, and their vital signs and laboratory results (e.g., blood pressure, CD4 cell counts) were well managed. They perceived themselves as free of major issues, feeling empowered to “make well-informed decisions” about their health.

3.1.3.2.2.5 Future health concerns

Participants reported a range of anticipated health concerns, even though they did not exhibit related symptoms at the time. High blood pressure, diabetes, and heart attacks were highlighted as “really big problems.” Participants had observed their immediate family members (e.g., grandparents, parents), relatives (e.g., aunts), and friends suffering from these conditions and had experienced losses as a result. They expressed concern about potential complications, such as diabetes-related blindness, limb loss, and limited mobility.
Heart attacks were perceived as particularly serious and as conditions that could unexpectedly affect people, even young individuals in their 30s. Stroke was identified as a common health concern among transgender individuals due to the risk of blood clots as a side effect of hormonal therapy. Cancer, particularly colon cancer, was noted as a higher risk for racially and ethnically minoritized groups. Participants also worried about the exacerbation of symptoms (e.g., worsening tinnitus leading to deafness) and the sudden onset of underlying conditions (e.g., seizures), even if these were currently controlled. Additionally, there was a fear of death related to HIV and concerns about mental health issues and age-related conditions, such as memory loss, Alzheimer’s disease, stiff joints, and resulting disability. Managing these potential health issues was seen as requiring “extra effort in addition to just living with HIV and AIDS,” prompting participants to seek regular screenings and medical consultations with healthcare providers.

“My grandmother is actually blind in one eye now due to diabetes. I’ve had some of my aunts lose limbs. … That stuff can get really serious. Diabetes is serious. People do not take it serious. It really is a serious disease. It’s more serious than they take it, to me.” (James, age 35).

Theme 3: Behaviors and regimens that improve health and well-being. This theme explored the health maintenance activities that interviewees participate in or wish to adopt to maintain and improve their well-being. These encompassed physical activity, a healthy diet, medical interventions and health education, mental health support, social support, and various other activities.

3.1.3.2.2.6 Promoting physical activity

When prompted to think about their physical activity, interviewees recalled activities such as “exercise,” “going to the gym more,” “walking a lot,” and “aerobic or cardio.” Physical activity levels varied with age and comorbid health conditions. Physical activity was bolstered by participating alongside peers or by incorporating it into daily routines, including daily commutes, grocery shopping, and watching television.

“I walk a lot. I try to, if I can walk, I try not to take a bus or a train if it’s within a good walking distance about half the time. Also, I do other stuff like I kayak off the Hudson and stuff like that.” (Jay, age 30).

“I walk a lot and…walk with some friends or some person; I feel ready and excited, good. And when I go to the gym, I find some person I know that I can do…when I go, sincerely, when I go to the gym, I’m doing more cardio, walking or cycling, that and other activities.” (Pedro, age 41).

3.1.3.2.2.7 Dietary changes and conscious eating habits

Regarding diet, participants recounted the conscious changes they made in efforts to improve their health. Common techniques included replacing sugar-sweetened beverages with water and limiting consumption of unhealthy and high-carbohydrate foods to “sometimes” or “one day per month.” Participants mentioned seeking information about nutrition from experts, peers, and media channels such as “the cooking channel.” Additionally, some participants mentioned how their cultural background influenced their dietary decisions.

“Before, I used to not care. And I’d eat a lot of fried stuff, and a lot of rice and pasta and all that stuff. But now everything is moderate with me.” (Cindy, age 55).
If they say eat healthy, I’m trying to eat healthy. I eat chicken breasts, salmon, white rice, quinoa, vegetables.” (BMW, age 63). (Translator response) “In the Haitian culture it’s a lot more home cooked meals than outside food. Like McDonald’s is considered junk food. McDonald’s is not…yes, we do not eat McDonald’s like that. We like home cooked meals – rice, beans, plants, and salads.” (Eddy, age 65). 3.1.3.2.2.8 Medical interventions and learning about health This subtheme explored the ways in which community members sought to manage their health and gain information about current medications and clinical treatments to “live with HIV” and comorbid conditions. They regularly met with doctors for activities such as “to get [their] heart check on” and “colon screens” and “to follow all the things my doctor orders.” Participants consulted various sources, including professionals, such as nutritionists and therapists, and online videos. However, they expressed a specific desire to learn health information from medical providers. Learning about their HIV diagnosis and how to cope with “the virus” was described as “calming” and allowed them to feel “much better.” Participants also used preventive measures, such as vaccinations and aspirin, to protect against future illness and proactively sought information about diseases that they could potentially encounter in the future. (Translator response) “He said the best answer is that you take your medication on time and you do whatever that is prescribed, like as your doctors recommend.” (Eddy, age 65). “It’s like they are coming out with different medications for HIV. They came out with Descovy. They came out with so many of them. So what I do is I, sometimes, I do my research. And I look online, YouTube or videos. I really find out certain information about it. Like for me to really hear somebody, like a medical provider who knows more than we do, that would be perfect, too.” (Rob, age 30). 3.1.3.2.2.9 Mental health support Participants navigated concerns about mental health using various techniques. Stress from their HIV diagnosis and other life circumstances manifested through stress eating, panic attacks, and depression. Participants lessened their mental burden by “socializing and connecting” with peers and family who shared similar health experiences. Outside of these interpersonal relationships, they also practiced meditation, scheduled “quiet time,” and attended therapy. In pursuit of a more relaxed lifestyle, participants also reframed their thoughts, such as having their minds focus “on other things” and “not paying to attention to things that cannot affect me.” “So one of the things that has helped me all my life with whatever, you name it, depression or this condition or whatever, is socializing and connecting to other people that are in the same position as I am.” (Xavier, age 39). “It’s always good to talk about it. The more you hold it in, the more you feel like I’m not comfortable, I do not want to express what I have. The best option I have is express your thoughts about it. Do not hold it in.” (Rob, age 30). 3.1.3.2.2.10 Social support The subtheme of social support explored how participants leveraged their social relationships to enhance their health. Numerous participants “relied” on friends, peers living with HIV, and “positive people” to motivate their health journey in areas such as physical activity and mental wellness. 
These supporters offered encouraging advice such as “take 1 day at a time” and “just stay on that right path.” Their straightforward activity guidance, such as “Do the exercise. Drink a lot of water. Walk for 30 min every day,” was also beneficial in helping participants maintain their health regimens. “… a support group was beneficial for me. And meeting more people living with this condition helped me a lot.” (Xavier, age 39). “As you do all these activities and all these actions, it makes your whole body feel better, makes you do more activities with my friends and with other people, other good role models who are there, who support me” (Rob, age 30). 3.1.3.2.2.11 Miscellaneous health and wellness practices Participants also shared other, miscellaneous health activities they performed. They understood the detrimental impact of alcohol consumption and smoking on their health, although some admitted challenges with smoking cessation. Seeking clean air, maintaining a healthy weight, and getting sufficient sleep were seen as positive actions for well-being. “My asthma is always on. It always ran through my genes. But for some reason, I still smoke. And my sisters and my baby mothers and my cousins, they do not like that about me.” (Bunny, age 32). Theme 4: Encounters with medication- This theme described participants’ motivations and experiences during adherence or non-adherence to medication regimens. Challenges to medication adherence included “complicated” prescription regimens, uncomfortable side effects, and denial of HIV diagnosis. 3.1.3.2.2.12 Benefits and effectiveness Participants adhered to medications when they saw them as a path to return to a “normal life.” Preventive medications were viewed as powerful in that a regular regimen of just a single medication could prevent “drastic” health effects for HIV or other chronic conditions. Although adhering to a strict schedule was sometimes challenging, they had positive thoughts about staying on the medication. Participants acknowledged that medication development and access had improved over time. “Nobody dies in this day with HIV. It’s one medication.” (Atlantic, age 47). “I take even aspirins every day to prevent a stroke… I feel deep down in my heart that I’m not going stop ever taking aspirins. And I even tell my mother. She’s almost 80 years old. Take an aspirin every day. Because just with one little small pill could just prevent something so drastic. But today, honestly, I can say it’s just going just fine. Because now, those combinations of two and three pills just in one medication.” (Cindy, age 55). 3.1.3.2.2.13 Side effects and concerns Participants described non-adherence due to deleterious side effects of medications that caused somatic symptoms such as diarrhea, acid reflux, and weight gain or resulted in psychological symptoms such as depression. The need to take several medications could also contribute to depression. When taking multiple medications at once or having a comorbid condition, participants found it challenging to determine whether discomfort stemmed from a chronic condition or the medication itself. “I feel super-blessed, super-blessed because I do not have to take so many pills and have different mood swings on the behalf of my medicines. One day, I was getting nauseous. Some days, I felt like I have diarrhea. And sometimes, I did not have an appetite. There was weight loss. It was very discomfort.” (Cindy, age 55). 
“… sometimes I feel like a little depression, because you know I need to take this medicine every day for all of my life.” (Jesus, age 54). “Sometimes I think I have some side effects from the medications, and like I have high blood pressure too, so that can be like, you know, some stuff that I can never really figure out like what’s causing what.” (Jay, age 30). Theme 5: Social encounters with in-groups and out-groups- This theme focused on participants’ interactions and relationships with both their peers from the community (i.e., living with HIV and having sexually, racially, and ethnically minoritized backgrounds) and individuals outside of it. Some interviewees described themselves as “a people person,” while others were more introverted. Peer relationships were usually positive, whereas interactions with out-group members varied from healing to stigmatizing. 3.1.3.2.2.14 Peer interactions Participants expressed that “meeting more people with this condition” helped them “a lot.” They also took on roles to educate and “advocate” for peers, helping them learn about HIV, chronic condition prevention (e.g., cancer), and “new information” in health. “Because even someone that actually was confessing to me that; how do you get this? And I explained it to them. And I like to advocate for my fellow peers, and even for myself.” (Cindy, age 55). 3.1.3.2.2.15 Interactions with others outside the community This subtheme explored how participants navigated social interactions outside of their community. They spoke about chronic health conditions with family members or sought information from live resources. Participants noted that interactions with those outside their community could stigmatize sexual minority men with HIV due to a lack of knowledge among the general population. Some suggested that this could be resolved through greater educational outreach about HIV. “I’m a people person. Like if I was wanted like hardcore information and stuff, I’d be more comfortable in going to like my doctor, or like a community health center or something if they had like groups or something. Like I like to see people and hear about people’s experiences and the exceptional things, like what real people are like.” (Xander, age 33). “Well with me, there’s a lot of stigma still. And this is 2022. And there’s still a stigma with HIV. In this time, people that do not inform themselves and people that are ignorant in the behalf that they try to push you to the side…A lot of my friends and fellow peers have been rejected with their family, giving them paper plates and disposable utensils, because they are family do not get informed about HIV.” (Cindy, age 55). “Even on the commercials, what he sees is targeting the gay community…not just the gay community will have HIV…even the commercials sometimes stigmatizes people, because that is the connection. Everything pink. Pink, pink, pink. Even the cookies. So, it’s stereotyping.” (Alberto, age 62). While some participants were open about their HIV diagnosis such that “everybody” knew, others chose not to disclose their HIV status to co-workers, friends, and family due to stigma and negative judgment. They would “pretend” not to “have anything” to maintain a “normal” facade. “Not everybody in my circle knows because I think this is something you need to be very careful who you tell it to because of the stigma. Not because I think there’s something wrong with it per se.” (Xavier, age 39). 
Theme 6: Desired delivery of health education- This theme focused on the health information that participants expected to obtain and the desired approach to delivering health education in a technology-based behavioral intervention. The desired topics of health information were divided into two subthemes: (1) treatment and medication and (2) preventive and general health information. Preferred approaches, including tone, atmosphere, and methodological aspects of health education, were explored in the subtheme of health information delivery quality. 3.1.3.2.2.16 Treatment and medication Participants indicated that they wanted to learn more about HIV and current comorbid health problems, such as high blood pressure. They were particularly interested in symptom control, self-management strategies, and medication. They emphasized the importance of including updated information in the intervention (e.g., vaccination for monkeypox) and expressed a desire for information on newly available HIV treatments and medications. “… how to control all of the symptoms that I have, with getting through the medications.” (BMW, age 63). “I want to know more information, new information that you are coming out with. That’s why I want to learn more, because it’s always good to learn.” (Rob, age 30). 3.1.3.2.2.17 Preventive and general health information Participants expressed a desire to learn more about “preventive measures” and “what [they] can do to be better” in health, such as exercising, healthy eating, and even handling emergencies (e.g., layperson cardiopulmonary resuscitation). They wanted to know “how to avoid” the negative consequences of their health behaviors. A “decision tree” was suggested as a method to illustrate the outcomes of their actions. In addition to HIV, they were interested in learning about other conditions, including their risks, symptoms, treatability, and the types of health professionals who could serve as resources, even if these conditions were not of immediate concern. “I’m always trying to learn even stuff that I do not have. I do not have diabetes. I do not have high blood pressure. I do not have cancer. I do not have venereal diseases. I do not have hepatitis C. But I try to inform myself.” (Cindy, age 55). 3.1.3.2.2.18 Health information delivery quality This subtheme examined the specific strategies and quality of health information delivery that participants desired. Participants emphasized the need for comprehensive health information, referring to it as “different stuff,” “every aspect,” and “a little bit of everything.” They also mentioned that health education should be “quick and informative,” as a “long drawn out” format causes participants to “tune out” or “check out.” Educational materials in the intervention should be simple, use easy-to-understand terminology, and include examples (e.g., how a plate should look for a healthy diet). Additionally, they expressed a preference for a positive tone, noting that pervasive negative health-related news can discourage community members. Participants highlighted the importance of reliable, well-structured sources of information, and they favored learning from a health educator who would lead group health education sessions. They envisioned the health educator as a “leader” or “navigator” who could “start a conversation” and “steer them in the right direction” during the sessions. 
They expected the health educator to be a “licensed” medical provider who “knows more than [they] do.” “In my opinion, it should not be very scientific. You know this high, scientific words, you know something simple that everybody could understand.” (Jaime, age 61). “… learning more about different kinds of people, like the medical people who know about it, to teach us more information about it. That would be perfect.” (Rob, age 30). Theme 7: Comfort in using technology and accessibility- This theme focused on participants’ perceptions of using technology and their comfort levels with it. It also explored the factors influencing their access to technology. Most participants, except for two, indicated that they were generally comfortable using technology. They described technology as “standard” these days and effective for information dissemination. Additionally, they noted that technology has been “a big help” and a “very effective way to connect with people.” “I’m very comfortable with technology. I love it, actually. And, I’m very comfortable making friends with people over the Internet.” (Jesus, age 54). However, comfort levels varied depending on the medium used, such as preferences for text messaging, specific social media platforms, or gaming. Factors influencing comfort levels also included technical accessibility and cultural acceptability. Age was largely cited as a determinant of technical accessibility. Older adult participants were often “not tech-savvy” or perceived as such by younger participants, preferring “face-to-face” communications. In contrast, younger individuals were perceived to favor “quick” online interactions or gaming. Cultural factors also played a role in accessibility, with participants mentioning that technology use can vary by race and ethnicity. Two participants expressed that they were not comfortable with technology at all due to old age, long periods of incarceration, and not having a computer at home. Nevertheless, one of these participants showed a willingness to learn and use technology. (Translator response) “Not a comfort. That does not apply to him. He does not have a computer at home; he’s not tech savvy. And only because he’s an elderly person, …” (Eddy, age 65). “Oh well, that’s easy. Technology has been a big help. At first, I was ‘iffy’ about it because I’m really old school. I was raised by a mother that was straight up Puerto Rican from the hills of the island. But, technology kind of grows on you if you allow it to. So, in the past couple of years, I’ve been able to actually meeting in person some Facebook friends locally in the area, and you know, so I’ve made some really good friendships through technology, yes, through the Internet, and they seem to be going very well…The only barrier that I would say to something like that would be, there are a lot of people in my community, in the black and brown community, that aren’t very tech savvy. So, they really would not know how to maneuver and you know… So, I think maybe…I do not know. It’s something that is a problem, and yes…” (Cheo, age 55). To increase accessibility, participants emphasized the ease of use and the need for training before using general technology or specific technology-based modalities (e.g., navigating gaming interfaces). Providing “how-to videos” was suggested as a potential method to facilitate learning. (Translator response) “He says he would not mind, but he needs to be trained, so he’s not comfortable in doing it because he does not know how to do it. 
But, if someone trains me, then I would be more comfortable in doing it.” (Roseman, age 62). Theme 8: Ways to nurture engagement in technology-based behavioral interventions- This theme centered around characteristics and activities interviewees desired to see in a virtual community space for health education that would encourage their active and sustained participation. 3.1.3.2.2.19 Interaction with peers Participants desired to meet other community members through interactions that mirrored ones in real life, such as support groups and health education conversations that would be “interactive and mutual.” Additionally, participants suggested that community members could intentionally “meet new people” and “socialize” with one another by including a general profile of interests and the ability to guide other players within the virtual environment to retrieve information. With regard to introverted individuals, some interviewees were unsure of their willingness to participate, while others thought that the space would help those “not ready to come out to the world” to “connect with others and let them know that they are not alone.” “And they actually did not take their medication for a long time because of being in denial. But when they realize they are not alone in a video game that they can be playing by themselves at their house, it connects them with this universe of people that are feeling the same way they are. It could be helpful to them.” (Xavier, age 39). 3.1.3.2.2.20 Fun Participants prioritized the aspect of “fun” and “games” when probed about desired activities in a virtual environment to motivate community members to take up health information. Participants emphasized that “medical” and “learning” material could be woven into non-educational activities and should use attention-grabbing words, not boring jargon, for laypersons. Competition in a gamified setting was highlighted as a common motivator to engage and retain user participation. Some participants wanted action-oriented, “violent” activities such as “killing” or “attacking” antagonists, such as “bad guys” or “heart disease,” that represented the health conditions users would be trying to prevent or overcome. “Special guests,” such as drag queens, would “grab someone’s attention” and keep them “tuned in” over time. “This is a game so you have to keep it fun. Do not make it too…you are in school, you are doing your work and the teacher asks a question and everybody is raising their hands to see who can answer the quickest. You get home and it’s time to do homework and you do not even want to sit down and do it. You have to keep them interested; keep people…it’s not just medical, you can also put fun, regular things in here…quiz them on cars or capitals of states…small things…and get their attention. As long as you keep it fun, I feel like the healthy part can just be mixed in there, blended all in.” (Success, age 41, and James, age 35). “A Monopoly game where the correct answer, throw the dice. It has to be competitive. Like I have to compete against somebody. I’m thinking about part of the game could be somewhere where people can talk. And then the rules in this house or in this club could be the games. So, I would invite people like; hey, nice to meet you. I’m Xavier. Let us talk a little bit. Hey, you like this. You like that. You know what? I challenge you to this game. So, we both get into that section on the club and start competing.” (Xavier, age 39). 
3.1.3.2.2.21 Innovation Participants expressed interest in the use of avatars due to their technological novelty and customization. They noted that avatars would “grab” their attention in a virtual space, and the virtual environment itself evoked interest since participants “did not have that in the past” to deliver information. “… the avatar is also good as well because a lot of the kids right now, that’s the way of what they are doing, so they can change their faces and so forth.” (Peter, age 46). 3.1.3.2.2.22 Diversity and inclusion This subtheme included participants’ views on the current limits and desired inclusion of various languages, cultures, and ages in the behavioral interventions. They emphasized multilingual content to prevent “language barriers” and ensure that participants “understand what they are seeing.” They also desired the inclusion of “Hispanic” and “Afro” cultures, such as through the use of culturally familiar foods in diet education, so that they could more easily relate to the information given. One participant also strongly emphasized the unmet need for a support space for community members over the age of 40 years, citing the lack of such spaces for this age group. Participants noted that depictions of avatars and characters within a virtual space should be “broad” and represent a wide spectrum of gender identities, body types, and clothing preferences. “… for example, if you are talking about, what is good to eat, in order to have a healthy life? If you tell me, ok, do not eat rice and [gandules], do not eat plantains, I know that plantains and [gandules] identify Latino people, in my opinion, identifies myself. But if you tell me, oh, it’s better for you to eat broccoli and dah, dah, dah, I say, oh, that is not Latino. Even though I know it is healthy to eat broccoli, but it’s not close to me.” (Jaime, age 61). “That’s one of my big issues. And I’m being totally honest about that. Any group support, anything; oh, you have to be under 40. You have to be between 18 and 35. And I always say; what about people over 40? We still have HIV. We still have problems.” (Atlantic, age 47). “But when I say “make it broad” like really open, I’m talking about all types of things; gender, also clothing, also… Because those are expressions.” (Xavier, age 39). 3.1.3.2.2.23 Trivia This subtheme described participants’ interest in the use of trivia-like games as a feature to facilitate health information uptake. They suggested that the implementation of “true-or-false multiple choice” and trivia games in general would encourage users to learn about health “conditions.” Trivia would also increase the depth of community members’ knowledge about their own conditions when they were unable to obtain that knowledge from other information sources. “But they are supposed to have a trivia like that. Like okay, I have cancer. I have liver problems. It’s connecting with my HIV or whatever ailment you have. And they give you the information where you can go. And they tell you where you can go or who you can call, but that’s it.” (Atlantic, age 47). 3.1.3.2.2.24 Visualization and posting Participants suggested several means of communication for effective health education to community members. They desired spoken content and “visual” content, such as videos and diagrams, rather than written content alone, to capture users’ attention. Brief video “series” were thought to retain attention over time. 
Posting “billboards,” “closed captioning,” and occasional “PSAs” (i.e., public service announcements) was also suggested to deliver health information in an obvious manner without disrupting the experience of navigating a virtual environment. (Translator response) “He says, one, you can do videos, and you can also give health messages on how medication improves health conditions. And also, you can post them throughout like, let us say, billboards, or commercials, stuff like that.” (Roseman, age 62). “I think informational links would be like diagrams and stuff, because everything is visual right now. People aren’t going to sit there and want to read a whole bunch of, you know, stuff, because everything now is like, you know, even with social media, it’s flip, you know, flip, flip, flip. So you know, even like a three-minute video with something, you know, more like a series. Like, one day you watch a video, then the next day you watch another video that is like five-minutes long. So, that keeps people’s attention where you give them like a cliff hanger at the end so that way they will want to watch the next video.” (Jay, age 30). 3.1.3.2.2.25 User-specific engagement preferences While most participants suggested their preferred approaches, some acknowledged that engagement depends on each user’s personal interests or preferences, regardless of their ability to use the technology or its intriguing features. This means that what is available or useful to one person might not be to another. “It depends on the person and how frequently they are on the app as well.” (Mr. Jean Pierre, age 50). “Now if you say I’ve got the most potential and you feel like I’m qualified to play for the NBA, does not mean that I want to play for the NBA. Okay?” (Bunny, age 32). One participant expressed a dislike of “meeting people he does not know,” even though he was comfortable with using technology and interested in behavioral health education. A few others responded that they were “not going to actually use” the program due to concerns about security and a lack of interest in the gaming format. In contrast, another view was that people end up using technology as a necessary tool of modern life, despite personal dislike or potential adverse effects. “At the end of the day, this would be a tool. … It’s like a car. A car is a tool for you to move. But if you use it wrong, you can kill somebody. So at the end of the day, people need to understand that. These are tools that you are going to use, and you decide how to use them.” (Xavier, age 39). Theme 9: Nurturing a safe space among users in technology-based behavioral interventions- Participants emphasized the need for technology-based environments to feel like safe spaces where they could choose how much personal information to share, including the option to stay anonymous. Personal privacy preferences were influenced by distrust of digital interactions due to bad actors. 3.1.3.2.2.26 Privacy in virtual environments Participants understood that privacy was valued differently among community members and that personal preferences for privacy could change over time. While some individuals were “open” and “comfortable” sharing their HIV status and “real name,” they still supported others’ needs to remain anonymous and use avatars until ready to share more about themselves in a virtual environment. “Well, of course privacy is very important. But, I think that if I know the decision should be made by the player. 
So if the player wants to use his real picture, for example, that’s ok. But if the player prefers to have an avatar, that should be ok too.” (Jaime, age 61). 3.1.3.2.2.27 Distrust and safety concerns This subtheme explored various concerns that participants held while using online technology. They understood that individuals they met online may be “shallow” and not forthcoming with their true identity, and thus expressed caution in meeting with such individuals in real life. Another concern was the potential of a closed virtual space to be infiltrated by bad actors who did not identify as community members and who may “prey on people.” Tracking information such as cookies and unrequested follow-up messages discouraged participants from logging onto certain online websites and applications. “Mean for the same reason. If someone shows themselves like this person and they sustain that, and then I’m interested in meeting that person, and it comes to be that that person is not what they described. I’m describing first what can go wrong. Hmm. And even worse things could happen. Like let us meet somewhere. Of course, you need to be really careful in these types of situations. It’s a very well-known rule, even with games, technology, and apps, that you can see the person, and you are not going to meet their person in their apartment.” (Xavier, age 39). “People can go online just to meet people, like even though it would be something that is around something positive, there are always those people who will try to like prey on people like that. And like somebody might join it and say yes, I’m a party of the community, and you know, learn all this information, get all the facts, just to like find somebody that they can connect and do some real craziness. Like no, maybe they are a killer, I do not know. I do not play those games” (Xander, age 33). “Privacy I think it’s the main, main, #1 thing. You have to have an app with privacy. I go here. But I know when I’m finished and I close that app or whatever name is that app, they are not going to be popping up in my emails as SPAM, or whatever you call it in emails, or in my Facebook or my Twitter or whatever. I know they are not being connected.” (Atlantic, age 47). 3.2 Step 2: logic model of change and matrix of objectives Based on the identified problems and needs in Step 1, we developed a logic model of change that outlines the expected program outcomes and their determinants (see ). In this model, outcomes are categorized into distal and proximal, reflecting the overarching goals of CVD prevention and CVH promotion through a technology-based behavioral intervention. Distal outcomes, which represent the primary goals of the intervention, include CVH-related physiological and psychological measures, such as blood pressure, total serum cholesterol, hemoglobin A1c, Body Mass Index (BMI), and depression severity. Proximal outcomes consist of specific behaviors crucial to achieving these goals: informed decision-making, CVH-promoting behaviors, self-management and symptom control, health care access and medical adherence, and social support. These proximal outcomes are directly influenced by key environmental and behavioral determinants, including knowledge, belief, medical distrust, stigma and discrimination, and culture. To achieve the desired outcomes, we established performance objectives (POs) at the behavioral level. For each determinant, we identified specific change objectives (COs) that align with the corresponding POs, detailing the actions necessary to drive these changes (see ). This structured approach ensures that each determinant is addressed systematically to promote the intended health outcomes. 
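Conceptually, the matrix of objectives is a lookup from each performance objective and determinant pair to a change objective. The sketch below (Python) is a minimal illustration of that structure, not the study's instrument: the determinant and outcome names come from the logic model above, while the wording of the single example change objective is hypothetical, since the actual COs are specified in the cited matrix table.

```python
from dataclasses import dataclass
from typing import Optional

# Determinants and proximal (behavioral) outcomes named in the logic model of change.
DETERMINANTS = [
    "knowledge", "belief", "medical distrust",
    "stigma and discrimination", "culture",
]
PERFORMANCE_OBJECTIVES = [
    "informed decision-making", "CVH-promoting behaviors",
    "self-management and symptom control",
    "health care access and medical adherence", "social support",
]

@dataclass
class ChangeObjective:
    """One cell of the matrix: what must change in a determinant for a PO."""
    performance_objective: str
    determinant: str
    action: str  # hypothetical wording; actual COs live in the study's table

# One hypothetical entry, purely illustrative.
MATRIX = {
    ("CVH-promoting behaviors", "knowledge"): ChangeObjective(
        "CVH-promoting behaviors", "knowledge",
        "Describe the Life's Essential 8 metrics and their target ranges.",
    ),
}

def change_objective(po: str, determinant: str) -> Optional[ChangeObjective]:
    """Look up the change objective for a PO-determinant pair, if specified."""
    if po not in PERFORMANCE_OBJECTIVES or determinant not in DETERMINANTS:
        raise ValueError("unknown performance objective or determinant")
    return MATRIX.get((po, determinant))
```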
3.3 Step 3: theory-based methods Diffusion of Innovations theory ( ) was selected as a conceptual framework for this study. This theory explores how “new ideas, practices, and technologies” become more familiar and widely adopted within society. It encompasses five key components: (1) innovation attributes—the features of the innovation that influence its adoption; (2) adopter innovativeness—the characteristics and willingness of individuals to embrace new ideas; (3) social system and opinion leaders—the structure and influential figures who can shape attitudes and behaviors; (4) adoption process—the stages an individual goes through when adopting the innovation; and (5) diffusion system—change agency/agents and their methods of promoting the innovation within the social system ( ). This theory has been frequently used in health intervention research, including studies involving sexually, racially, and ethnically minoritized men and those living with HIV ( , ). Given that this study focused on the adoption of innovative health behaviors through a technology-based intervention for CVD prevention, the Diffusion of Innovations theory was well-suited to guide the research. In developing this intervention, which targeted Black and Latinx sexual minority men living with HIV, we also incorporated the Social Determinants of Health Framework as applied to racial and ethnic disparities in CVD outcomes ( ). This framework examines how various social, economic, and environmental factors contribute to CVH inequities, highlighting the considerable impact of structural racism and discrimination as key drivers of these disparities. Given our focus on a population from sexually minoritized and historically disadvantaged racial and ethnic communities, the Social Determinants of Health Framework provided a strong foundation for the research. 3.3.1 Practical strategies The practical strategies for this protocol were developed using the Intervention Mapping framework, emphasizing culturally tailored digital tools such as avatar-led videos and virtual environments ( ). These tools were designed to address specific barriers faced by Black and Latinx sexual minority men with HIV, such as medical distrust and stigma ( ). Additionally, the virtual environment behavioral intervention was premised on recommendations for CVH. The American Heart Association created Life’s Essential 8, a set of key health metrics for promoting CVH. These metrics include: (1) maintaining a heart-healthy diet, (2) engaging in physical activity (at least 150 min of moderate-intensity aerobic activity or 75 min of vigorous activity per week), (3) eliminating nicotine exposure (smoking and secondhand smoke), (4) prioritizing sleep health (7–9 h of quality sleep per night for adults), (5) achieving and maintaining a healthy body weight (BMI between 18.5 and 24.9), (6) managing cholesterol levels (low-density lipoprotein, high-density lipoprotein, and triglycerides), (7) controlling blood glucose (fasting blood glucose under 100 mg/dL or HbA1c less than 5.7%), and (8) maintaining optimal blood pressure (less than 120/80 mmHg) ( , ). Recently, the American Heart Association published stroke prevention guidelines that address the importance of risk assessment in transgender women ( ). Expanding recommendations to address underrepresented populations advances inclusivity and better health for all. 
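Because Life's Essential 8 states concrete numeric targets, those thresholds can be encoded directly. The Python sketch below is a simplified pass/fail check under assumed field names (e.g., fasting_glucose_mg_dl, systolic_mmhg); it is illustrative only, since the American Heart Association's published scoring grades each metric on a 0 to 100 scale rather than pass/fail.

```python
def lifes_essential_8_check(p: dict) -> dict:
    """Return a pass/fail flag for each of Life's Essential 8 metrics.

    `p` maps assumed field names to measurements; diet, nicotine, and
    cholesterol management are treated as self-reported booleans here
    because the source text gives no numeric cutoffs for them.
    """
    return {
        "heart_healthy_diet": p["heart_healthy_diet"],
        "physical_activity": (p["moderate_activity_min_per_week"] >= 150
                              or p["vigorous_activity_min_per_week"] >= 75),
        "no_nicotine_exposure": not p["nicotine_exposure"],  # smoking or secondhand smoke
        "sleep_health": 7 <= p["sleep_hours_per_night"] <= 9,
        "healthy_weight": 18.5 <= p["bmi"] <= 24.9,
        "cholesterol_managed": p["cholesterol_managed"],  # LDL, HDL, triglycerides
        "blood_glucose": (p["fasting_glucose_mg_dl"] < 100
                          or p["hba1c_percent"] < 5.7),
        "blood_pressure": (p["systolic_mmhg"] < 120
                           and p["diastolic_mmhg"] < 80),
    }

# Example: a profile meeting all eight targets yields all-True flags.
example = {
    "heart_healthy_diet": True, "moderate_activity_min_per_week": 180,
    "vigorous_activity_min_per_week": 0, "nicotine_exposure": False,
    "sleep_hours_per_night": 8, "bmi": 23.0, "cholesterol_managed": True,
    "fasting_glucose_mg_dl": 92, "hba1c_percent": 5.3,
    "systolic_mmhg": 112, "diastolic_mmhg": 72,
}
assert all(lifes_essential_8_check(example).values())
```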
When designing interventions, grounding programs in practical strategies could facilitate the uptake and adoption of heart health behaviors and ensure that health promotion is both accessible and relevant to a community’s unique cultural and social needs ( ). Moreover, valuing the lived experiences of the target community, respecting and incorporating cultural values, and prioritizing the voices of the community in shaping behavioral interventions enhance the promise of achieving optimal health ( ). When seeking to conduct research with ethnic and racial communities, investigators should acknowledge their social positioning, such as being someone who may or may not share the same community or lived experiences as their sample population. Acknowledging positionality is necessary to foster trust, ensure the ethical conduct of research, and make research outcomes relevant and beneficial for the communities involved ( , ).
Theme 7: Comfort in using technology and accessibility- This theme focused on participants’ perceptions of using technology and their comfort levels with it. It also explored the factors influencing their access to technology. Most participants, except for two, indicated that they were generally comfortable using technology. They described technology as “standard” these days and effective for information dissemination. Additionally, they noted that technology has been “a big help” and a “very effective way to connect with people.” “I’m very comfortable with technology. I love it, actually. And, I’m very comfortable making friends with people over the Internet.” (Jesus, age 49). However, comfort levels varied depending on the medium used, such as preferences for text messaging, specific social media platforms, or gaming. Factors influencing comfort levels also included technical accessibility and cultural acceptability. Age was largely cited as a determinant of technical accessibility. Older adults participants were often “not tech-savvy” or perceived as such by younger participants, preferring “face-to-face” communications. In contrast, younger individuals were perceived to favor “quick” online interactions or gaming. Cultural factors also played a role in accessibility, with participants mentioning that technology use can vary by race and ethnicity. Two participants expressed that they were not comfortable with technology at all due to old age, long periods of incarceration, and not having a computer at home. Nevertheless, one of these participants showed a willingness to learn and use technology. (Translator response) “Not a comfort. That does not apply to him. He does not have a computer at home; he’s not tech savvy. And only because he’s an elderly person, …” (Eddy, age 65). “Oh well, that’s easy. Technology has been a big help. At first, I was ‘iffy’ about it because I’m really old school. I was raised by a mother that was straight up Puerto Rican from the hills of the island. But, technology kind of grows on you if you allow it to. So, in the past couple of years, I’ve been able to actually meeting in person some Facebook friends locally in the area, and you know, so I’ve made some really good friendships through technology, yes, through the Internet, and they seem to be going very well…The only barrier that I would say to something like that would be, there are a lot of people in my community, in the black and brown community, that aren’t very tech savvy. So, they really would not know how to maneuver and you know… So, I think maybe…I do not know. It’s something that is a problem, and yes…” (Cheo, age 55). To increase accessibility, participants emphasized the ease of use and the need for training before using general technology or specific technology-based modalities (e.g., navigating gaming interfaces). Providing “how-to videos” was suggested as a potential method to facilitate learning. (Translator response) “He says he would not mind, but he needs to be trained, so he’s not comfortable in doing it because he does not know how to do it. But, if someone trains me, then I would be more comfortable in doing it.” (Roseman, age 62). Theme 8: Ways to nurture engagement in technology-based behavioral interventions- This theme centered around characteristics and activities interviewees desired to see in a virtual community space for health education that would encourage their active and sustained participation. 
3.1.3.2.2.19 Interaction with peers Participants desired to meet other community members through interactions that mirrored ones in real life, such as support groups and health education conversations that would be “interactive and mutual.” Additionally, participants suggested that community members could intentionally “meet new people” and “socialize” with one another by including a general profile of interests and the ability to guide other players within the virtual environment to retrieve information. In regards to introverted individuals, some interviewees were unsure of their willingness to participate, while others thought that the space would help those “not ready to come out to the world” to “connect with others and let them know that they are not alone.” “And they actually did not take their medication for a long time because of being in denial. But when they realize they are not alone in a video game that they can be playing by themselves at their house, it connects them with this universe of people that are feeling the same way they are. It could be helpful to them.” (Xavier, age 39). 3.1.3.2.2.20 Fun Participants prioritized the aspect of “fun” and “games” when probed about desired activities in a virtual environment to motivate community members to uptake health information. Participants emphasized that “medical” and “learning” material could be woven into non-educational activities and should use attention-grabbing words, not boring jargon, for laypersons. Competition in a gamified setting was highlighted as a common motivator to engage and retain user participation. Some participants wanted action-oriented, “violent” activities such as “killing” or “attacking” antagonists such as “bad guys” or “heart disease” that represented the health conditions users would be trying to prevent or overcome. “Special guests,” such as drag queens, would “grab someone’s attention” and keep them “tuned in” over time. “This is a game so you have to keep it fun. Do not make it too…you are in school, you are doing your work and the teacher asks a question and everybody is raising their hands to see who can answer the quickest. You get home and it’s time to do homework and you do not even want to sit down and do it. You have to keep them interested; keep people…it’s not just medical, you can also put fun, regular things in here…quiz them on cars or capitals of states…small things…and get their attention. As long as you keep it fun, I feel like the healthy part can just be mixed in there, blended all in.” (Success, age 41 and James, age 35). “A Monopoly game where the correct answer, throw the dice. It has to be competitive. Like I have to compete against somebody. I’m thinking about part of the game could be somewhere where people can talk. And then the rules in this house or in this club could be the games. So, I would invite people like; hey, nice to meet you. I’m Xavier. Let us talk a little bit. Hey, you like this. You like that. You know what? I challenge you to this game. So, we both get into that section on the club and start competing.” (Xavier, age 39). 3.1.3.2.2.21 Innovation Participants expressed interest in the use of avatars due to their technological novelty and customization. They noted that avatars would “grab” their attention in a virtual space, and the virtual environment itself evoked interest since participants “did not have that in the past” to deliver information. 
“… the avatar is also good as well because a lot of the kids right now, that’s the way of what they are doing, so they can change their faces and so forth.” (Peter, age 46). 3.1.3.2.2.22 Diversity and inclusion This subtheme included participants’ views on the current limits and desired inclusion of various languages, cultures, and ages in the behavioral interventions. They emphasized multilingual content to prevent “language barriers” and ensure that participants “understand what they are seeing.” They also desired the inclusion of “Hispanic” and “Afro” cultures, such as through the use of culturally familiar foods in diet education, so that they could more easily relate to the information given. One participant also deeply emphasized the unmet need for a support space for community members over the age of 40 years based on the lack of such spaces for this age group. Participants noted that depictions of avatars and characters within a virtual space should be “broad” and represent a wide spectrum of gender identities, body types, and clothing preferences. “… for example, if you are talking about, what is good to eat, in order to have a healthy life? If you tell me, ok, do not eat rice and gondolas, do not eat plantains, I know that plantains and [gandules] identify Latino people, in my opinion, identifies myself. But if you tell me, oh, it’s better for you to eat broccoli and dah, dah, dah, I say, oh, that is not Latino. Even though I know it is healthy to eat broccoli, but it’s not close to Me.” (Jaime, age 61). “That’s one of my big issues. And I’m being totally honest about that. Any group support, anything; oh, you have to be under 40. You have to be between 18 and 35. And I always say; what about people over 40? We still have HIV. We still have problems.” (Atlantic, age 47). “But when I say “make it broad” like really open, I’m talking about all types of things; gender, also clothing, also… Because those are expressions.” (Xavier, age 39). 3.1.3.2.2.23 Trivia This subtheme described participants’ interest in the use of trivia-like games as a feature to facilitate health information uptake. They suggested that the implementation of “true-or-false multiple choice” and trivia games in general would encourage users to learn about health “conditions.” Trivia would also increase the depth of community members’ knowledge about their own conditions when they were unable to attain the knowledge from other information sources. “But they are supposed to have a trivia like that. Like okay, I have cancer. I have liver problems. It’s connecting with my HIV or whatever ailment you have. And they give you the information where you can go. And they tell you where you can go or who you can call, but that’s it.” (Atlantic, age 47). 3.1.3.2.2.24 Visualization and posting Participants suggested several means of communication for effective health education to community members. They desired spoken content and “visual” content, such as videos and diagrams, rather than written content alone, to capture users’ attention. Brief video “series” were thought to retain attention over time. Posting “billboards,” “closed captioning,” and occasional “PSAs” (i.e., public service announcements) were also suggested to deliver health information in an obvious manner without disrupting the experience of navigating a virtual environment. (Translator response) “He says, one, you can do videos, and you can also give health messages on how medication improves health conditions. 
And also, you can post them throughout like, let us say, billboards, or commercials, stuff like that.” (Roseman, age 62). “I think informational links would be like diagrams and stuff, because everything is visual right now. People aren’t going to sit there and want to read a whole bunch of, you know, stuff, because everything now is like, you know, even with social media, it’s flip, you know, flip, flip, flip. So you know, even like a three-minute video with something, you know, more like a series. Like, one day you watch a video, then the next day you watch another video that is like five-minutes long. So, that keeps people’s attention where you give them like a cliff hanger at the end so that way they will want to watch the next video.” (Jay, age 30). 3.1.3.2.2.25 User-specific engagement preferences While most participants suggested their preferred approaches, some acknowledged that engagement depends on each user’s personal interests or preferences, regardless of their ability to use the technology or its intriguing features. This means that what is available or useful to one person might not be to another. “It depends on the person and how frequently they are on the app as well.” (Mr. Jean Pierre, age 50). “Now if you say I’ve got the most potential and you feel like I’m qualified to play for the NBA, does not mean that I want to play for the NBA. Okay?” (Bunny, age 32). One participant expressed a dislike of “meeting people he does not know,” even though he was comfortable with using technology and interested in behavioral health education. A few others responded that they were “not going to actually use” the program due to concerns about security and a lack of interest in the gaming format. In contrast, another opinion was that people end up using technology as a necessary tool of current trends, despite personal dislike or potential adverse effects. “At the end of the day, this would be a tool. … It’s like a car. A car is a tool for you to move. But if you use it wrong, you can kill somebody. So at the end of the day, people need to understand that. These are tools that you are going to use, and you decide how to use them.” (Xavier, age 39). Theme 9: Nurturing a safe space among users in technology-based behavioral interventions- Participants emphasized the need for technology-based environments to feel like safe spaces where they could choose how much personal information to share, including the option to stay anonymous. Personal privacy preferences were influenced by distrust of digital interactions due to bad actors. 3.1.3.2.2.26 Privacy in virtual environments Participants understood that privacy was valued differently among community members and that personal preferences for privacy could change over time. While some individuals were “open” and “comfortable” sharing their HIV status and “real name,” they still supported others’ needs to remain anonymous and use avatars until ready to share more about themselves in a virtual environment. “Well, of course privacy is very important. But, I think that if I know the decision should be made by the player. So if the player wants to use his real picture, for example, that’s ok. But if the player prefers to have an avatar, that should be ok too.” (Jaime, age 61). 3.1.3.2.2.27 Distrust and safety concerns This subtheme explored various concerns that participants held while using online technology. 
They understood that individuals they met online may be “shallow” and not forthcoming with their true identity, and thus expressed caution in meeting with such individuals in real life. Another concern was the potential of a closed virtual space to be infiltrated by bad actors who did not identify as community members and who may “prey on people.” Tracking information such as cookies and unrequested follow-up messages discouraged participants from logging onto certain online websites and applications. “Mean for the same reason. If someone shows themselves like this person and they sustain that, and then I’m interested in meeting that person, and it comes to be that that person is not what they described. I’m describing first what can go wrong. Hmm. And even worse things could happen. Like let us meet somewhere. Of course, you need to be really careful in these types of situations. It’s a very well-known rule, even with games, technology, and apps, that you can see the person, and you are not going to meet their person in their apartment.” (Xavier, age 39). “People can go online just to meet people, like even though it would be something that is around something positive, there are always those people who will try to like prey on people like that. And like somebody might join it and say yes, I’m a party of the community, and you know, learn all this information, get all the facts, just to like find somebody that they can connect and do some real craziness. Like no, maybe they are a killer, I do not know. I do not play those games” (Xander, age 33). “Privacy I think it’s the main, main, #1 thing. You have to have an app with privacy. I go here. But I know when I’m finished and I close that app or whatever name is that app, they are not going to be popping up in my emails as SPAM, or whatever you call it in emails, or in my Facebook or my Twitter or whatever. I know they are not being connected.” (Atlantic, age 47).
Literature review
We conducted a scoping review of behavioral interventions for CVD prevention among adults living with HIV ( ). The review highlighted a growing emphasis on non-pharmacological, multicomponent approaches addressing lifestyle CVD risk factors such as physical activity, diet, and weight management. Most US studies were concentrated in the Southeast, suggesting that future research should extend to underrepresented geographic regions and include a broader range of populations at elevated CVD risk. Details of the full review can be found elsewhere ( ).
Framework development
We have presented an innovative eHealth technology framework to shift the existing paradigm of medical distrust among sexual minority men of color through a stepwise, multi-construct approach ( ). Our framework was developed in multidisciplinary collaboration with leaders in nursing, public health, and bioethics. The framework illustrates how eHealth interventions encourage engagement through the adoption and use of technology, anonymity, co-presence, self-disclosure, and social support to foster trustworthiness and trust in healthcare. We proposed the use of two eHealth modalities: (1) a virtual environment and (2) avatar-led videos (i.e., computer-generated, three-dimensional online spaces and human-like digital representations, respectively). These technologies provide private, interactive platforms that empower individuals and improve access to reliable health information, thereby promoting health behaviors in sexual minority men from racial and ethnic minority communities with chronic conditions.
Local needs assessment
3.1.3.1 Quantitative assessment
Quantitative assessment using validated survey measures revealed that most participants perceived their conditions as manageable yet serious and reported that the associated symptoms were complex. More than half did not meet the minimum recommendations for physical activity, and a third reported current nicotine use. The findings also highlighted sleep and mental health disparities, as well as financial hardship, associated with living with HIV. The descriptive findings of this quantitative study have been detailed elsewhere ( ).
3.1.3.2 Qualitative assessment
The following are the results of the qualitative data analysis.
3.1.3.2.1 Participant characteristics
Among the 30 community members who participated in this study, the mean age was 47.5 years (SD = 12.5), and the mean duration since HIV diagnosis was 17.2 years (SD = 11.1, range 1–41). All participants (N = 30) reported having health insurance and access to care, with 97% (n = 29) having a regular provider and being on antiretroviral therapy. Participants reported being out of the closet for an average of 25.7 years (SD = 14.4). The majority of participants preferred the gender pronouns “he/him” (97%, n = 29), while one participant (3%) preferred “she/her.” For race and ethnicity, we documented responses verbatim as participants identified themselves, adhering to the gold standard of self-identification for reporting these demographics ( ). Regarding ethnic background, 70% of participants (n = 21) self-identified as Latinx. While Latinx ethnicity refers to having heritage from Latin America and the Caribbean, regardless of race, Haitian participants in our study did not self-identify as Latinx but as Black, despite Haiti being part of Latin America. This distinction may be associated with Haiti’s unique history and culture, which are rooted in African descent, and its primary language, Haitian Creole. Further demographic information is presented in .
3.1.3.2.2 Thematic analysis
Using an inductive coding scheme, we identified nine major themes: (1) perceptions of health, (2) current and anticipated health concerns, (3) behaviors and regimens that improve health and well-being, (4) encounters with medication, (5) social encounters with in-groups and out-groups, (6) desired delivery of health education, (7) comfort in using technology and accessibility, (8) ways to nurture engagement, and (9) nurturing a safe space among users in technology-based behavioral interventions in Black and Latinx sexual minority men with HIV. Cohen’s kappa coefficient, which corrects observed intercoder agreement for agreement expected by chance, indicated almost perfect agreement (κ = 0.95) based on Viera and Garrett’s criteria for interpreting the kappa statistic ( ). The themes and subthemes are described below, along with supporting quotes. Pseudonyms were used to safeguard the identities of participants. If a participant is referred to as “he/his/him” in the quotes below, it indicates that a translator has conveyed the participant’s words into English.
Theme 1: Perceptions of health
This theme focused on the overall perception of health while living with HIV. It included describing one’s health status, control over health, and perceptions of aging as subthemes.
3.1.3.2.2.1 Describing one’s health status
The interviewees were asked to rate their current health. Responses ranged from unhealthy/negative through average/neutral to healthy/positive.
Participants who perceived themselves as healthy described their health as “very well,” “fine,” “pretty good,” “strong and solid,” “perfect,” “super-blessed,” “completely cool,” or “free,” with some rating their health status numerically (e.g., 10 out of 10). Factors associated with positive health perceptions included regular “medical checkups,” receiving treatment and medication, not “getting sick,” not having “too many health conditions” or “any pain,” and disclosing their condition. They felt healthy when they could “work,” “be able,” and live a “normal” life, such as “going out to do [one’s] errands,” “traveling,” or “just with a little extra precaution.” Some participants evaluated their health positively when their conditions improved compared to their baseline condition or when test results, such as CD4 cell count, showed improvement. “I have already the treatment. I also I’m open about my condition with my friends. I do not have nothing right now that is bothering me like that. I have a good doctor. So I feel that my life is good right now, and I feel healthy.” (Ellie, age 48). In the average/neutral category, participants described their health status as “regular,” “fair,” “average,” “up to par,” “50–50,” and “in the middle.” Underlying conditions such as HIV and other comorbidities, uncertainty about the causes of their illness and symptoms, and the burden of taking multiple medications and dealing with their side effects prevented them from perceiving themselves as fully healthy. “Well, in relation to my HIV, I believe it’s really good. I mean everything is under control. But I have underlying conditions, which cause distraction in my health, so that’s why I rated myself fair.” (Cheo, age 55). Participants who perceived their health status as negative described managing their health as “stressful,” “very hard,” “very difficult,” and “not easy” due to HIV and comorbidities, along with a lack of “possibilities” or availability of treatment and medications. They mentioned coping mechanisms such as “denial,” ignorance, “crying,” and being “isolated” in reaction to their HIV diagnosis and reported feeling lonely, irritable, cranky, tired, depressed, and afraid. “Some days, I wake up being depressed. It has not been easy.” (Yoga, age 65). “Because you know I have this problem with high [blood] pressure … and sometimes that I can feel a little bad for that.” (Jesus, age 54).
3.1.3.2.2.2 Control over health
The subtheme of control over health explored participants’ perceptions of how they could control their own health. Participants mentioned they could “control their own body” and “illness.” They also mentioned that their “lifestyle choices” are responsible for their health status and that it is “up to” themselves to “make well-informed decisions.” They perceived the importance of “making changes” and “taking care of [themselves]” to “manage” and “improve” their health. “The high blood pressure, I do believe that some like of my lifestyle choices I think is what led me to developing it. So, it is important that I kind of like have been able to manage it with like medicine and stuff.” (Xander, age 33). “Your energy, your strength, and your mentality controls your illness in your body.” (Bunny, age 32). “I always say; I believe HIV lives with me. I have control of what I eat, what I do to take care of myself.” (Manuel, age 62).
3.1.3.2.2.3 Perceptions of aging
Regarding aging, participants acknowledged physiological decline and reduced functionality.
They mentioned experiencing or anticipating health problems they were not overly concerned about, noting that their bodies were “not like when [they] were younger.” They also discussed reductions in physical activity, metabolism, and social life. Specific concerns associated with aging included physical illnesses and disabilities, such as “stiff joints” and “walking with a cane,” as well as mental issues like “loss of memory” or Alzheimer’s disease. Despite these concerns, participants expressed a promising outlook on longevity while living with HIV. They believed they could still engage in health-promoting activities as they aged, such as exercising at an appropriate intensity instead of “vigorous” physical activity and finding a balance between alone time and socializing. “Because once you grow up, you can get sick. And your health is not the same. Your body’s not the same. Your body changes.” (Atlantic, age 47). (Translator response) “But you know, when you have age and your elderly, you cannot do it as much.” (Roseman, age 62).
Theme 2: Current and anticipated health concerns
This theme explored the health concerns that participants were experiencing and those they worried about facing in the future. Participants expressed significant concerns about chronic, long-term health conditions. When discussing the potential sources of these concerns, they frequently referenced their family’s heredity, family medical history, and observations within their community.
3.1.3.2.2.4 Current health concerns
While participants reported a variety of current health concerns, they largely expressed significant worries about chronic CVD, including diabetes, high blood pressure, high cholesterol, and heart disease. Other chronic conditions mentioned included gastrointestinal issues (e.g., cirrhosis, stomach ulcers), neurological conditions (e.g., seizure disorder), pulmonary diseases (e.g., breathing problems, asthma), auditory concerns (e.g., chronic tinnitus), and conditions possibly related to chronic inflammation (e.g., joint pain, carpal tunnel syndrome, plantar fasciitis). Participants also expressed concern about mental health conditions, such as post-traumatic stress disorder, depression, and anxiety, which they perceived as being associated with their HIV diagnosis and medication. Beyond chronic diseases, participants reported lifestyle-related health concerns such as excess weight and sleep problems (e.g., difficulty falling asleep, obesity-induced sleep apnea). Infectious diseases, including influenza and SARS-CoV-2 infection (COVID-19), were also mentioned. Participants described these conditions as “cumbersome,” noting that they interfered with leading a normal life, including regular activities and diet. Managing these conditions often required significant lifestyle changes to meet medical recommendations and guidelines. While some participants acknowledged that their ‘lifestyle choices led [them] to developing’ these chronic conditions, others expressed uncertainty about “what’s causing what.” “I feel like a little depression, because you know I need to take this medicine every day for all of my life.” (Jesus, age 54). “Well, my main concern is diabetes, to be honest with you. It’s one of the most challenging things that I’ve ever had to go through. It puts everything else on the backburner as far as my focus, which is on diabetes type 2. It’s really difficult to manage. You have to make drastic live-changes [sic] and diet changes.” (Cheo, age 55).
Some participants reported having no current health concerns when their HIV-related symptoms were well controlled with medication, they had no chronic conditions or other illnesses, and their vital signs and laboratory results (e.g., blood pressure, CD4 cell counts) were well managed. They perceived themselves as free of major issues, feeling empowered to “make well-informed decisions” about their health.
3.1.3.2.2.5 Future health concerns
Participants reported a range of anticipated health concerns, even though they did not exhibit related symptoms at the time. High blood pressure, diabetes, and heart attacks were highlighted as “really big problems.” They observed their immediate family members (e.g., grandparents, parents), relatives (e.g., aunts), and friends suffering from these conditions and had experienced losses as a result. Participants expressed concern about potential complications, such as diabetes-related blindness, limb loss, and limited mobility. Heart attacks were perceived as particularly serious and as conditions that could unexpectedly affect people, even young individuals in their 30s. Stroke was identified as a common health concern among transgender individuals due to the risk of blood clots as a side effect of hormonal therapy. Cancer, particularly colon cancer, was noted as a higher risk for racially and ethnically minoritized groups. Participants also worried about the exacerbation of symptoms (e.g., worsening tinnitus leading to deafness) and the sudden onset of underlying conditions (e.g., seizures), even if these were currently controlled. Additionally, there was a fear of death related to HIV and concerns about mental health issues and age-related conditions, such as memory loss, Alzheimer’s disease, stiff joints, and resulting disability. Managing these potential health issues was seen as requiring “extra effort in addition to just living with HIV and AIDS,” prompting participants to seek regular screenings and medical consultations with healthcare providers. “My grandmother is actually blind in one eye now due to diabetes. I’ve had some of my aunts lose limbs. … That stuff can get really serious. Diabetes is serious. People do not take it serious. It really is a serious disease. It’s more serious than they take it, to me.” (James, age 35).
Theme 3: Behaviors and regimens that improve health and well-being
This theme explored the health maintenance activities that interviewees participated in or wished to adopt to maintain and improve their well-being. It encompassed physical activity, a healthy diet, medical interventions and health education, mental health support, social support, and various other activities.
3.1.3.2.2.6 Promoting physical activity
When prompted to think about their physical activity, interviewees recalled activities such as “exercise,” “going to the gym more,” “walking a lot,” and “aerobic or cardio.” Physical activity levels varied due to age or comorbid health conditions. Physical activity was bolstered by participating alongside peers or by incorporating it into daily routines, including daily commutes, grocery shopping, and watching television. “I walk a lot. I try to, if I can walk, I try not to take a bus or a train if it’s within a good walking distance about half the time. Also, I do other stuff like I kayak off the Hudson and stuff like that.” (Jay, age 30). “I walk a lot and…walk with some friends or some person; I feel ready and excited, good.
And when I go to the gym, I find some person I know that I can do…when I go, sincerely, when I go to the gym, I’m doing more cardio, walking or cycling, that and other activities.” (Pedro, age 41).
3.1.3.2.2.7 Dietary changes and conscious eating habits
Regarding diet, participants recounted the conscious changes they made in efforts to improve their health. Common techniques included replacing sugar-sweetened beverages with water and limiting consumption of unhealthy and high-carbohydrate foods to “sometimes” or “one day per month.” Participants mentioned seeking information about nutrition from experts, peers, and media channels such as “the cooking channel.” Additionally, some participants mentioned how cultural background influenced their dietary decisions. “Before, I used to not care. And I’d eat a lot of fried stuff, and a lot of rice and pasta and all that stuff. But now everything is moderate with me.” (Cindy, age 55). “If [my doctors] say to drink a lot of water, I drink a lot of water. If they say eat healthy, I’m trying to eat healthy. I eat chicken breasts, salmon, white rice, quinoa, vegetables.” (BMW, age 63). (Translator response) “In the Haitian culture it’s a lot more home cooked meals than outside food. Like McDonald’s is considered junk food. McDonald’s is not…yes, we do not eat McDonald’s like that. We like home cooked meals – rice, beans, plants, and salads.” (Eddy, age 65).
3.1.3.2.2.8 Medical interventions and learning about health
This subtheme explored the ways in which community members sought to manage their health and gain information about current medications and clinical treatments to “live with HIV” and comorbid conditions. They regularly met with doctors “to get [their] heart check on,” for “colon screens,” and “to follow all the things my doctor orders.” Participants consulted various sources, including professionals, such as nutritionists and therapists, and online videos. However, they expressed a specific desire to learn health information from medical providers. Learning about their HIV diagnosis and how to cope with “the virus” was described as “calming” and allowed them to feel “much better.” Participants also used preventive measures, such as vaccinations and aspirin, to protect against future illness and proactively sought information about diseases that they could potentially encounter in the future. (Translator response) “He said the best answer is that you take your medication on time and you do whatever that is prescribed, like as your doctors recommend.” (Eddy, age 65). “It’s like they are coming out with different medications for HIV. They came out with Descovy. They came out with so many of them. So what I do is I, sometimes, I do my research. And I look online, YouTube or videos. I really find out certain information about it. Like for me to really hear somebody, like a medical provider who knows more than we do, that would be perfect, too.” (Rob, age 30).
3.1.3.2.2.9 Mental health support
Participants navigated concerns about mental health using various techniques. Stress from their HIV diagnosis and other life circumstances manifested through stress eating, panic attacks, and depression. Participants lessened their mental burden by “socializing and connecting” with peers and family who shared similar health experiences. Outside of these interpersonal relationships, they also practiced meditation, scheduled “quiet time,” and attended therapy.
In pursuit of a more relaxed lifestyle, participants also reframed their thoughts, such as having their minds focus “on other things” and “not paying to attention to things that cannot affect me.” “So one of the things that has helped me all my life with whatever, you name it, depression or this condition or whatever, is socializing and connecting to other people that are in the same position as I am.” (Xavier, age 39). “It’s always good to talk about it. The more you hold it in, the more you feel like I’m not comfortable, I do not want to express what I have. The best option I have is express your thoughts about it. Do not hold it in.” (Rob, age 30).
3.1.3.2.2.10 Social support
The subtheme of social support explored how participants leveraged their social relationships to enhance their health. Numerous participants “relied” on friends, peers living with HIV, and “positive people” to motivate their health journey in areas such as physical activity and mental wellness. These supporters offered encouraging advice such as “take 1 day at a time” and “just stay on that right path.” Their straightforward activity guidance, such as “Do the exercise. Drink a lot of water. Walk for 30 min every day,” was also beneficial in helping participants maintain their health regimens. “… a support group was beneficial for me. And meeting more people living with this condition helped me a lot.” (Xavier, age 39). “As you do all these activities and all these actions, it makes your whole body feel better, makes you do more activities with my friends and with other people, other good role models who are there, who support me” (Rob, age 30).
3.1.3.2.2.11 Miscellaneous health and wellness practices
Participants also shared other miscellaneous health activities they performed. They understood the detrimental impact of alcohol consumption and smoking on their health, although some admitted challenges with smoking cessation. Seeking clean air, maintaining a healthy weight, and getting sufficient sleep were seen as positive actions for well-being. “My asthma is always on. It always ran through my genes. But for some reason, I still smoke. And my sisters and my baby mothers and my cousins, they do not like that about me.” (Bunny, age 32).
Theme 4: Encounters with medication
This theme described participants’ motivations and experiences during adherence or non-adherence to medication regimens. Challenges to medication adherence included “complicated” prescription regimens, uncomfortable side effects, and denial of HIV diagnosis.
3.1.3.2.2.12 Benefits and effectiveness
Participants adhered to medications when they saw them as a path to return to a “normal life.” Preventive medications were viewed as powerful in that a regular regimen of just a single medication could prevent “drastic” health effects for HIV or other chronic conditions. Although adhering to a strict schedule was sometimes challenging, they had positive thoughts about staying on the medication. Participants acknowledged that medication development and access had improved over time. “Nobody dies in this day with HIV. It’s one medication.” (Atlantic, age 47). “I take even aspirins every day to prevent a stroke… I feel deep down in my heart that I’m not going stop ever taking aspirins. And I even tell my mother. She’s almost 80 years old. Take an aspirin every day. Because just with one little small pill could just prevent something so drastic. But today, honestly, I can say it’s just going just fine.
Because now, those combinations of two and three pills just in one medication.” (Cindy, age 55).
3.1.3.2.2.13 Side effects and concerns
Participants described non-adherence due to deleterious side effects of medications that caused somatic symptoms such as diarrhea, acid reflux, and weight gain or resulted in psychological symptoms such as depression. The need to take several medications could also contribute to depression. When taking multiple medications at once or having a comorbid condition, participants found it challenging to determine whether discomfort stemmed from a chronic condition or the medication itself. “I feel super-blessed, super-blessed because I do not have to take so many pills and have different mood swings on the behalf of my medicines. One day, I was getting nauseous. Some days, I felt like I have diarrhea. And sometimes, I did not have an appetite. There was weight loss. It was very discomfort.” (Cindy, age 55). “… sometimes I feel like a little depression, because you know I need to take this medicine every day for all of my life.” (Jesus, age 54). “Sometimes I think I have some side effects from the medications, and like I have high blood pressure too, so that can be like, you know, some stuff that I can never really figure out like what’s causing what.” (Jay, age 30).
Theme 5: Social encounters with in-groups and out-groups
This theme focused on participants’ interactions and relationships with both their peers from the community (i.e., living with HIV and having sexual, racially, and ethnically minoritized backgrounds) and individuals outside of it. Some interviewees described themselves as “a people person,” while others were more introverted. Peer relationships were usually positive, whereas interaction with out-group members varied from healing to stigmatizing.
3.1.3.2.2.14 Peer interactions
Participants expressed that “meeting more people with this condition” helped them “a lot.” They also took on roles to educate and “advocate” for peers, helping them learn about HIV, chronic condition prevention (e.g., cancer), and “new information” in health. “Because even someone that actually was confessing to me that; how do you get this? And I explained it to them. And I like to advocate for my fellow peers, and even for myself.” (Cindy, age 55).
3.1.3.2.2.15 Interactions with others outside the community
This subtheme explored how participants navigated social interactions outside of their community. They spoke about chronic health conditions with family members or sought information from live resources. Participants noted that interactions with those outside their community could stigmatize sexual minority men with HIV due to a lack of knowledge among the general population. Some suggested that this could be resolved through greater educational outreach about HIV. “I’m a people person. Like if I was wanted like hardcore information and stuff, I’d be more comfortable in going to like my doctor, or like a community health center or something if they had like groups or something. Like I like to see people and hear about people’s experiences and the exceptional things, like what real people are like.” (Xander, age 33). “Well with me, there’s a lot of stigma still. And this is 2022. And there’s still a stigma with HIV.
In this time, people that do not inform themselves and people that are ignorant in the behalf that they try to push you to the side…A lot of my friends and fellow peers have been rejected with their family, giving them paper plates and disposable utensils, because they are family do not get informed about HIV.” (Cindy, age 55). “Even on the commercials, what he sees is targeting the gay community…not just the gay community will have HIV…even the commercials sometimes stigmatizes people, because that is the connection. Everything pink. Pink, pink, pink. Even the cookies. So, it’s stereotyping.” (Alberto, age 62). While some participants were open about their HIV diagnosis such that “everybody” knew, others chose not to disclose their HIV status to co-workers, friends, and family due to stigma and negative judgment. They would “pretend” not to “have anything” to maintain a “normal” facade. “Not everybody in my circle knows because I think this is something you need to be very careful who you tell it to because of the stigma. Not because I think there’s something wrong with it per se.” (Xavier, age 39).
Theme 6: Desired delivery of health education
This theme focused on the health information that participants expected to obtain and the desired approach to delivering health education in a technology-based behavioral intervention. The desired topics of health information were divided into two subthemes: (1) treatment and medication and (2) preventive and general health information. Preferred approaches, including tone, atmosphere, and methodological aspects of health education, were explored in the subtheme of health information delivery quality.
3.1.3.2.2.16 Treatment and medication
Participants indicated that they wanted to learn more about HIV and current comorbid health problems, such as high blood pressure. They were particularly interested in symptom control, self-management strategies, and medication. They emphasized the importance of including updated information in the intervention (e.g., vaccination for monkeypox) and expressed a desire to obtain information on newly discovered HIV treatments and medications. “… how to control all of the symptoms that I have, with getting through the medications.” (BMW, age 63). “I want to know more information, new information that you are coming out with. That’s why I want to learn more, because it’s always good to learn.” (Rob, age 30).
3.1.3.2.2.17 Preventive and general health information
Participants expressed a desire to learn more about “preventive measures” and “what [they] can do to be better” in health, such as exercising, healthy eating, and even handling emergencies (e.g., layperson cardiopulmonary resuscitation). They wanted to know “how to avoid” the negative consequences of their health behaviors. A “decision tree” was suggested as a method to illustrate the outcomes of their actions. In addition to HIV, they were interested in learning about other conditions, including their risks, symptoms, treatability, and the types of health professionals who could serve as resources, even if these conditions were not of immediate concern. “I’m always trying to learn even stuff that I do not have. I do not have diabetes. I do not have high blood pressure. I do not have cancer. I do not have venereal diseases. I do not have hepatitis C. But I try to inform myself.” (Cindy, age 55).
3.1.3.2.2.18 Health information delivery quality
This subtheme examined the specific strategies and quality of health information delivery that participants desired. Participants emphasized the need for comprehensive health information, referring to it as “different stuff,” “every aspect,” and “a little bit of everything.” They also mentioned that health education should be “quick and informative,” as a “long drawn out” format causes participants to “tune out” or “check out.” Educational materials in the intervention should be simple, use easy-to-understand terminology, and include examples (e.g., how a plate should look for a healthy diet). Additionally, they expressed a preference for a positive tone, noting that pervasive negative health-related news can discourage community members. Participants highlighted the importance of reliable, well-structured sources of information, and they favored learning from a health educator who would lead group health education sessions. They envisioned the health educator as a “leader” or “navigator” who could “start a conversation” and “steer them in the right direction” during the sessions. They expected the health educator to be a “licensed” medical provider who “knows more than [they] do.” “In my opinion, it should not be very scientific. You know this high, scientific words, you know something simple that everybody could understand.” (Jaime, age 61). “… learning more about different kinds of people, like the medical people who know about it, to teach us more information about it. That would be perfect.” (Rob, age 30).
Theme 7: Comfort in using technology and accessibility
This theme focused on participants’ perceptions of using technology and their comfort levels with it. It also explored the factors influencing their access to technology. All but two participants indicated that they were generally comfortable using technology. They described technology as “standard” these days and effective for information dissemination. Additionally, they noted that technology has been “a big help” and a “very effective way to connect with people.” “I’m very comfortable with technology. I love it, actually. And, I’m very comfortable making friends with people over the Internet.” (Jesus, age 49). However, comfort levels varied depending on the medium used, such as preferences for text messaging, specific social media platforms, or gaming. Factors influencing comfort levels also included technical accessibility and cultural acceptability. Age was largely cited as a determinant of technical accessibility. Older adult participants were often “not tech-savvy” or perceived as such by younger participants, preferring “face-to-face” communications. In contrast, younger individuals were perceived to favor “quick” online interactions or gaming. Cultural factors also played a role in accessibility, with participants mentioning that technology use can vary by race and ethnicity. Two participants expressed that they were not comfortable with technology at all due to old age, long periods of incarceration, and not having a computer at home. Nevertheless, one of these participants showed a willingness to learn and use technology. (Translator response) “Not a comfort. That does not apply to him. He does not have a computer at home; he’s not tech savvy. And only because he’s an elderly person, …” (Eddy, age 65). “Oh well, that’s easy. Technology has been a big help. At first, I was ‘iffy’ about it because I’m really old school.
I was raised by a mother that was straight up Puerto Rican from the hills of the island. But, technology kind of grows on you if you allow it to. So, in the past couple of years, I’ve been able to actually meeting in person some Facebook friends locally in the area, and you know, so I’ve made some really good friendships through technology, yes, through the Internet, and they seem to be going very well…The only barrier that I would say to something like that would be, there are a lot of people in my community, in the black and brown community, that aren’t very tech savvy. So, they really would not know how to maneuver and you know… So, I think maybe…I do not know. It’s something that is a problem, and yes…” (Cheo, age 55). To increase accessibility, participants emphasized ease of use and the need for training before using general technology or specific technology-based modalities (e.g., navigating gaming interfaces). Providing “how-to videos” was suggested as a potential method to facilitate learning. (Translator response) “He says he would not mind, but he needs to be trained, so he’s not comfortable in doing it because he does not know how to do it. But, if someone trains me, then I would be more comfortable in doing it.” (Roseman, age 62).
Theme 8: Ways to nurture engagement in technology-based behavioral interventions
This theme centered on the characteristics and activities interviewees desired to see in a virtual community space for health education that would encourage their active and sustained participation.
3.1.3.2.2.19 Interaction with peers
Participants desired to meet other community members through interactions that mirrored ones in real life, such as support groups and health education conversations that would be “interactive and mutual.” Additionally, participants suggested that community members could intentionally “meet new people” and “socialize” with one another by including a general profile of interests and the ability to guide other players within the virtual environment to retrieve information. With regard to introverted individuals, some interviewees were unsure of their willingness to participate, while others thought that the space would help those “not ready to come out to the world” to “connect with others and let them know that they are not alone.” “And they actually did not take their medication for a long time because of being in denial. But when they realize they are not alone in a video game that they can be playing by themselves at their house, it connects them with this universe of people that are feeling the same way they are. It could be helpful to them.” (Xavier, age 39).
3.1.3.2.2.20 Fun
Participants prioritized “fun” and “games” when probed about desired activities in a virtual environment to motivate community members to take up health information. Participants emphasized that “medical” and “learning” material could be woven into non-educational activities and should use attention-grabbing words, not boring jargon, for laypersons. Competition in a gamified setting was highlighted as a common motivator to engage and retain user participation. Some participants wanted action-oriented, “violent” activities such as “killing” or “attacking” antagonists such as “bad guys” or “heart disease” that represented the health conditions users would be trying to prevent or overcome. “Special guests,” such as drag queens, would “grab someone’s attention” and keep them “tuned in” over time. “This is a game so you have to keep it fun.
Do not make it too…you are in school, you are doing your work and the teacher asks a question and everybody is raising their hands to see who can answer the quickest. You get home and it’s time to do homework and you do not even want to sit down and do it. You have to keep them interested; keep people…it’s not just medical, you can also put fun, regular things in here…quiz them on cars or capitals of states…small things…and get their attention. As long as you keep it fun, I feel like the healthy part can just be mixed in there, blended all in.” (Success, age 41 and James, age 35). “A Monopoly game where the correct answer, throw the dice. It has to be competitive. Like I have to compete against somebody. I’m thinking about part of the game could be somewhere where people can talk. And then the rules in this house or in this club could be the games. So, I would invite people like; hey, nice to meet you. I’m Xavier. Let us talk a little bit. Hey, you like this. You like that. You know what? I challenge you to this game. So, we both get into that section on the club and start competing.” (Xavier, age 39). 3.1.3.2.2.21 Innovation Participants expressed interest in the use of avatars due to their technological novelty and customization. They noted that avatars would “grab” their attention in a virtual space, and the virtual environment itself evoked interest since participants “did not have that in the past” to deliver information. “… the avatar is also good as well because a lot of the kids right now, that’s the way of what they are doing, so they can change their faces and so forth.” (Peter, age 46). 3.1.3.2.2.22 Diversity and inclusion This subtheme included participants’ views on the current limits and desired inclusion of various languages, cultures, and ages in the behavioral interventions. They emphasized multilingual content to prevent “language barriers” and ensure that participants “understand what they are seeing.” They also desired the inclusion of “Hispanic” and “Afro” cultures, such as through the use of culturally familiar foods in diet education, so that they could more easily relate to the information given. One participant also deeply emphasized the unmet need for a support space for community members over the age of 40 years based on the lack of such spaces for this age group. Participants noted that depictions of avatars and characters within a virtual space should be “broad” and represent a wide spectrum of gender identities, body types, and clothing preferences. “… for example, if you are talking about, what is good to eat, in order to have a healthy life? If you tell me, ok, do not eat rice and gondolas, do not eat plantains, I know that plantains and [gandules] identify Latino people, in my opinion, identifies myself. But if you tell me, oh, it’s better for you to eat broccoli and dah, dah, dah, I say, oh, that is not Latino. Even though I know it is healthy to eat broccoli, but it’s not close to Me.” (Jaime, age 61). “That’s one of my big issues. And I’m being totally honest about that. Any group support, anything; oh, you have to be under 40. You have to be between 18 and 35. And I always say; what about people over 40? We still have HIV. We still have problems.” (Atlantic, age 47). “But when I say “make it broad” like really open, I’m talking about all types of things; gender, also clothing, also… Because those are expressions.” (Xavier, age 39). 
3.1.3.2.2.23 Trivia This subtheme described participants’ interest in the use of trivia-like games as a feature to facilitate health information uptake. They suggested that the implementation of “true-or-false multiple choice” and trivia games in general would encourage users to learn about health “conditions.” Trivia would also increase the depth of community members’ knowledge about their own conditions when they were unable to attain the knowledge from other information sources. “But they are supposed to have a trivia like that. Like okay, I have cancer. I have liver problems. It’s connecting with my HIV or whatever ailment you have. And they give you the information where you can go. And they tell you where you can go or who you can call, but that’s it.” (Atlantic, age 47). 3.1.3.2.2.24 Visualization and posting Participants suggested several means of communication for effective health education to community members. They desired spoken content and “visual” content, such as videos and diagrams, rather than written content alone, to capture users’ attention. Brief video “series” were thought to retain attention over time. Posting “billboards,” “closed captioning,” and occasional “PSAs” (i.e., public service announcements) were also suggested to deliver health information in an obvious manner without disrupting the experience of navigating a virtual environment. (Translator response) “He says, one, you can do videos, and you can also give health messages on how medication improves health conditions. And also, you can post them throughout like, let us say, billboards, or commercials, stuff like that.” (Roseman, age 62). “I think informational links would be like diagrams and stuff, because everything is visual right now. People aren’t going to sit there and want to read a whole bunch of, you know, stuff, because everything now is like, you know, even with social media, it’s flip, you know, flip, flip, flip. So you know, even like a three-minute video with something, you know, more like a series. Like, one day you watch a video, then the next day you watch another video that is like five-minutes long. So, that keeps people’s attention where you give them like a cliff hanger at the end so that way they will want to watch the next video.” (Jay, age 30). 3.1.3.2.2.25 User-specific engagement preferences While most participants suggested their preferred approaches, some acknowledged that engagement depends on each user’s personal interests or preferences, regardless of their ability to use the technology or its intriguing features. This means that what is available or useful to one person might not be to another. “It depends on the person and how frequently they are on the app as well.” (Mr. Jean Pierre, age 50). “Now if you say I’ve got the most potential and you feel like I’m qualified to play for the NBA, does not mean that I want to play for the NBA. Okay?” (Bunny, age 32). One participant expressed a dislike of “meeting people he does not know,” even though he was comfortable with using technology and interested in behavioral health education. A few others responded that they were “not going to actually use” the program due to concerns about security and a lack of interest in the gaming format. In contrast, another opinion was that people end up using technology as a necessary tool of current trends, despite personal dislike or potential adverse effects. “At the end of the day, this would be a tool. … It’s like a car. A car is a tool for you to move. 
But if you use it wrong, you can kill somebody. So at the end of the day, people need to understand that. These are tools that you are going to use, and you decide how to use them.” (Xavier, age 39). Theme 9: Nurturing a safe space among users in technology-based behavioral interventions- Participants emphasized the need for technology-based environments to feel like safe spaces where they could choose how much personal information to share, including the option to stay anonymous. Personal privacy preferences were influenced by distrust of digital interactions due to bad actors. 3.1.3.2.2.26 Privacy in virtual environments Participants understood that privacy was valued differently among community members and that personal preferences for privacy could change over time. While some individuals were “open” and “comfortable” sharing their HIV status and “real name,” they still supported others’ needs to remain anonymous and use avatars until ready to share more about themselves in a virtual environment. “Well, of course privacy is very important. But, I think that if I know the decision should be made by the player. So if the player wants to use his real picture, for example, that’s ok. But if the player prefers to have an avatar, that should be ok too.” (Jaime, age 61). 3.1.3.2.2.27 Distrust and safety concerns This subtheme explored various concerns that participants held while using online technology. They understood that individuals they met online may be “shallow” and not forthcoming with their true identity, and thus expressed caution in meeting with such individuals in real life. Another concern was the potential of a closed virtual space to be infiltrated by bad actors who did not identify as community members and who may “prey on people.” Tracking information such as cookies and unrequested follow-up messages discouraged participants from logging onto certain online websites and applications. “Mean for the same reason. If someone shows themselves like this person and they sustain that, and then I’m interested in meeting that person, and it comes to be that that person is not what they described. I’m describing first what can go wrong. Hmm. And even worse things could happen. Like let us meet somewhere. Of course, you need to be really careful in these types of situations. It’s a very well-known rule, even with games, technology, and apps, that you can see the person, and you are not going to meet their person in their apartment.” (Xavier, age 39). “People can go online just to meet people, like even though it would be something that is around something positive, there are always those people who will try to like prey on people like that. And like somebody might join it and say yes, I’m a party of the community, and you know, learn all this information, get all the facts, just to like find somebody that they can connect and do some real craziness. Like no, maybe they are a killer, I do not know. I do not play those games” (Xander, age 33). “Privacy I think it’s the main, main, #1 thing. You have to have an app with privacy. I go here. But I know when I’m finished and I close that app or whatever name is that app, they are not going to be popping up in my emails as SPAM, or whatever you call it in emails, or in my Facebook or my Twitter or whatever. I know they are not being connected.” (Atlantic, age 47).
Quantitative assessment

Quantitative assessment using validated survey measures revealed that most participants perceived their conditions as manageable yet serious and reported that the associated symptoms were complex. More than half did not meet the minimum recommendations for physical activity, and one-third reported current nicotine use. The findings also highlighted disparities in sleep and mental health, as well as financial hardship, associated with living with HIV. The descriptive findings of this quantitative study have been detailed elsewhere ( ).
Qualitative assessment

The following are the results of the qualitative data analysis.

3.1.3.2.1 Participant characteristics

Among the 30 community members who participated in this study, the mean age was 47.5 years (SD = 12.5), and the mean duration since HIV diagnosis was 17.2 years (SD = 11.1, range 1–41). All participants (N = 30) reported having health insurance and access to care, with 97% (n = 29) having a regular provider and being on antiretroviral therapy. Participants reported being out of the closet for an average of 25.7 years (SD = 14.4). Most participants used the pronouns “he/him” (97%, n = 29), while one participant (3%) used “she/her.” For race and ethnicity, we documented responses verbatim as participants identified themselves, adhering to the gold standard of self-identification for reporting these demographics ( ). Regarding ethnic background, 70% of participants (n = 21) self-identified as Latinx. Although Latinx ethnicity refers to heritage from Latin America and the Caribbean regardless of race, the Haitian participants in our study self-identified as Black rather than Latinx, despite Haiti being part of Latin America. This distinction may reflect Haiti’s distinct history and culture, which are rooted in African descent, and its primary language, Haitian Creole. Further demographic information is presented in .

3.1.3.2.2 Thematic analysis

Using an inductive coding scheme, we identified nine major themes: (1) perceptions of health, (2) current and anticipated health concerns, (3) behaviors and regimens that improve health and well-being, (4) encounters with medication, (5) social encounters with in-groups and out-groups, (6) desired delivery of health education, (7) comfort in using technology and accessibility, (8) ways to nurture engagement, and (9) nurturing a safe space among users in technology-based behavioral interventions in Black and Latinx sexual minority men with HIV. Cohen’s kappa coefficient indicated almost perfect intercoder agreement (κ = 0.95), based on Viera and Garrett’s criteria for interpreting the kappa statistic ( ). The themes and subthemes are described below, along with supporting quotes. Pseudonyms were used to safeguard participants’ identities. If a participant is referred to as “he/his/him” in the quotes below, it indicates that a translator conveyed the participant’s words into English.
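For readers less familiar with the statistic, Cohen’s kappa adjusts raw percent agreement between coders for the agreement expected by chance. As a minimal illustration only (the agreement proportions below are hypothetical values chosen to reproduce the reported coefficient, not the study’s actual coding data), an observed agreement of p_o = 0.96 against a chance agreement of p_e = 0.20 gives:

\[
\kappa = \frac{p_o - p_e}{1 - p_e} = \frac{0.96 - 0.20}{1 - 0.20} = \frac{0.76}{0.80} = 0.95
\]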
Theme 1: Perceptions of health

This theme focused on the overall perception of health in living with HIV. It included describing one’s health status, control over health, and perceptions of aging as subthemes.

3.1.3.2.2.1 Describing one’s health status

The interviewees were asked to rate their current health. Responses ranged from unhealthy/negative through average/neutral to healthy/positive. Participants who perceived themselves as healthy described their health as “very well,” “fine,” “pretty good,” “strong and solid,” “perfect,” “super-blessed,” “completely cool,” or “free,” with some rating their health status numerically (e.g., 10 out of 10). Factors associated with positive health perceptions included regular “medical checkups,” receiving treatment and medication, not “getting sick,” not having “too many health conditions” or “any pain,” and disclosing their condition. They felt healthy when they could “work,” “be able,” and live a “normal” life, such as “going out to do [one’s] errands,” “traveling,” or “just with a little extra precaution.” Some participants evaluated their health positively when their conditions improved compared to baseline or when test results, such as CD4 cell count, showed improvement.

“I have already the treatment. I also I’m open about my condition with my friends. I do not have nothing right now that is bothering me like that. I have a good doctor. So I feel that my life is good right now, and I feel healthy.” (Ellie, age 48).

In the average/neutral category, participants described their health status as “regular,” “fair,” “average,” “up to par,” “50–50,” and “in the middle.” Underlying conditions such as HIV and other comorbidities, uncertainty about the causes of their illness and symptoms, and the burden of taking multiple medications and dealing with their side effects prevented them from perceiving themselves as fully healthy.

“Well, in relation to my HIV, I believe it’s really good. I mean everything is under control. But I have underlying conditions, which cause distraction in my health, so that’s why I rated myself fair.” (Cheo, age 55).

Participants who perceived their health status as negative described managing their health as “stressful,” “very hard,” “very difficult,” and “not easy” due to HIV and comorbidities, along with a lack of “possibilities” or availability of treatment and medications. They mentioned coping mechanisms such as “denial,” ignorance, “crying,” and being “isolated” in reaction to their HIV diagnosis and reported feeling lonely, irritable, cranky, tired, depressed, and afraid.

“Some days, I wake up being depressed. It has not been easy.” (Yoga, age 65).

“Because you know I have this problem with high [blood] pressure … and sometimes that I can feel a little bad for that.” (Jesus, age 54).

3.1.3.2.2.2 Control over health

The subtheme of control over health explored participants’ perceptions of how they could control their own health. Participants mentioned they could “control their own body” and “illness.” They also mentioned that their “lifestyle choices” were responsible for their health status and that it was “up to” themselves to “make well-informed decisions.” They perceived the importance of “making changes” and “taking care of [themselves]” to “manage” and “improve” their health.

“The high blood pressure, I do believe that some like of my lifestyle choices I think is what led me to developing it. So, it is important that I kind of like have been able to manage it with like medicine and stuff.” (Xander, age 33).

“Your energy, your strength, and your mentality controls your illness in your body.” (Bunny, age 32).

“I always say; I believe HIV lives with me. I have control of what I eat, what I do to take care of myself.” (Manuel, age 62).

3.1.3.2.2.3 Perceptions of aging

Regarding aging, participants acknowledged physiological decline and reduced functionality. They mentioned experiencing or anticipating health problems they were not overly concerned about, noting that their bodies were “not like when [they] were younger.” They also discussed reduced physical activity, metabolism, and social life. Specific concerns associated with aging included physical illnesses and disabilities, such as “stiff joints” and “walking with a cane,” as well as mental issues like “loss of memory” or Alzheimer’s disease.
Despite these concerns, participants expressed a hopeful outlook on longevity while living with HIV. They believed they could still engage in health-promoting activities as they aged, such as exercising at an appropriate intensity rather than “vigorous” physical activity and finding a balance between alone time and socializing.

“Because once you grow up, you can get sick. And your health is not the same. Your body’s not the same. Your body changes.” (Atlantic, age 47).

(Translator response) “But you know, when you have age and your elderly, you cannot do it as much.” (Roseman, age 62).

Theme 2: Current and anticipated health concerns

This theme explored the health concerns that participants were experiencing and those they worried about facing in the future. Participants expressed significant concerns about chronic, long-term health conditions. When discussing the potential sources of these concerns, they frequently referenced hereditary factors, family medical history, and observations within their community.

3.1.3.2.2.4 Current health concerns

While participants reported a variety of current health concerns, they largely expressed significant worries about chronic cardiovascular and metabolic conditions, including diabetes, high blood pressure, high cholesterol, and heart disease.
3.1.3.2.2.5 Future health concerns

Participants reported a range of anticipated health concerns, even though they did not exhibit related symptoms at the time. High blood pressure, diabetes, and heart attacks were highlighted as “really big problems.” They had observed immediate family members (e.g., grandparents, parents), relatives (e.g., aunts), and friends suffering from these conditions and had experienced losses as a result. Participants expressed concern about potential complications, such as diabetes-related blindness, limb loss, and limited mobility. Heart attacks were perceived as particularly serious and as conditions that could unexpectedly affect people, even young individuals in their 30s. Stroke was identified as a common health concern among transgender individuals due to the risk of blood clots as a side effect of hormonal therapy. Cancer, particularly colon cancer, was noted as a higher risk for racially and ethnically minoritized groups. Participants also worried about the exacerbation of symptoms (e.g., worsening tinnitus leading to deafness) and the sudden onset of underlying conditions (e.g., seizures), even if these were currently controlled. Additionally, there was a fear of death related to HIV and concerns about mental health issues and age-related conditions, such as memory loss, Alzheimer’s disease, stiff joints, and resulting disability. Managing these potential health issues was seen as requiring “extra effort in addition to just living with HIV and AIDS,” prompting participants to seek regular screenings and medical consultations with healthcare providers.

“My grandmother is actually blind in one eye now due to diabetes. I’ve had some of my aunts lose limbs. … That stuff can get really serious. Diabetes is serious. People do not take it serious. It really is a serious disease. It’s more serious than they take it, to me.” (James, age 35).

Theme 3: Behaviors and regimens that improve health and well-being

This theme explored the health maintenance activities that interviewees participated in or wished to adopt to maintain and improve their well-being. It encompassed physical activity, a healthy diet, medical interventions and health education, mental health support, social support, and various other activities.

3.1.3.2.2.6 Promoting physical activity

When prompted to think about their physical activity, interviewees recalled activities such as “exercise,” “going to the gym more,” “walking a lot,” and “aerobic or cardio.” Physical activity levels varied with age and comorbid health conditions. Physical activity was bolstered by participating alongside peers or by incorporating it into daily routines, including daily commutes, grocery shopping, and watching television.

“I walk a lot. I try to, if I can walk, I try not to take a bus or a train if it’s within a good walking distance about half the time. Also, I do other stuff like I kayak off the Hudson and stuff like that.” (Jay, age 30).

“I walk a lot and…walk with some friends or some person; I feel ready and excited, good. And when I go to the gym, I find some person I know that I can do…when I go, sincerely, when I go to the gym, I’m doing more cardio, walking or cycling, that and other activities.” (Pedro, age 41).

3.1.3.2.2.7 Dietary changes and conscious eating habits

Regarding diet, participants recounted the conscious changes they had made in efforts to improve their health.
Common techniques included replacing sugar-sweetened beverages with water and limiting consumption of unhealthy, high-carbohydrate foods to “sometimes” or “one day per month.” Participants mentioned seeking information about nutrition from experts, peers, and media channels such as “the cooking channel.” Additionally, some participants mentioned how cultural background influenced their dietary decisions.

“Before, I used to not care. And I’d eat a lot of fried stuff, and a lot of rice and pasta and all that stuff. But now everything is moderate with me.” (Cindy, age 55).

“If [my doctors] say to drink a lot of water, I drink a lot of water. If they say eat healthy, I’m trying to eat healthy. I eat chicken breasts, salmon, white rice, quinoa, vegetables.” (BMW, age 63).

(Translator response) “In the Haitian culture it’s a lot more home cooked meals than outside food. Like McDonald’s is considered junk food. McDonald’s is not…yes, we do not eat McDonald’s like that. We like home cooked meals – rice, beans, plants, and salads.” (Eddy, age 65).

3.1.3.2.2.8 Medical interventions and learning about health

This subtheme explored the ways in which community members sought to manage their health and gain information about current medications and clinical treatments to “live with HIV” and comorbid conditions. They regularly met with doctors for activities such as “to get [their] heart check on” and “colon screens” and “to follow all the things my doctor orders.” Participants consulted various sources, including professionals, such as nutritionists and therapists, and online videos. However, they expressed a specific desire to learn health information from medical providers. Learning about their HIV diagnosis and how to cope with “the virus” was described as “calming” and allowed them to feel “much better.” Participants also used preventive measures, such as vaccinations and aspirin, to protect against future illness, and proactively sought information about diseases they might encounter in the future.

(Translator response) “He said the best answer is that you take your medication on time and you do whatever that is prescribed, like as your doctors recommend.” (Eddy, age 65).

“It’s like they are coming out with different medications for HIV. They came out with Descovy. They came out with so many of them. So what I do is I, sometimes, I do my research. And I look online, YouTube or videos. I really find out certain information about it. Like for me to really hear somebody, like a medical provider who knows more than we do, that would be perfect, too.” (Rob, age 30).

3.1.3.2.2.9 Mental health support

Participants navigated concerns about mental health using various techniques. Stress from an HIV diagnosis and other life circumstances manifested through stress eating, panic attacks, and depression. Participants lessened their mental burden by “socializing and connecting” with peers and family who shared similar health experiences. Outside of these interpersonal relationships, they also practiced meditation, scheduled “quiet time,” and attended therapy. In pursuit of a more relaxed lifestyle, participants also reframed their thoughts, such as having their minds focus “on other things” and “not paying to attention to things that cannot affect me.”

“So one of the things that has helped me all my life with whatever, you name it, depression or this condition or whatever, is socializing and connecting to other people that are in the same position as I am.” (Xavier, age 39).
“It’s always good to talk about it. The more you hold it in, the more you feel like I’m not comfortable, I do not want to express what I have. The best option I have is express your thoughts about it. Do not hold it in.” (Rob, age 30).

3.1.3.2.2.10 Social support

The subtheme of social support explored how participants leveraged their social relationships to enhance their health. Numerous participants “relied” on friends, peers living with HIV, and “positive people” to motivate their health journey in areas such as physical activity and mental wellness. These supporters offered encouraging advice such as “take 1 day at a time” and “just stay on that right path.” Their straightforward activity guidance, such as “Do the exercise. Drink a lot of water. Walk for 30 min every day,” was also beneficial in helping participants maintain their health regimens.

“… a support group was beneficial for me. And meeting more people living with this condition helped me a lot.” (Xavier, age 39).

“As you do all these activities and all these actions, it makes your whole body feel better, makes you do more activities with my friends and with other people, other good role models who are there, who support me” (Rob, age 30).

3.1.3.2.2.11 Miscellaneous health and wellness practices

Participants also shared other, miscellaneous health activities they performed. They understood the detrimental impact of alcohol consumption and smoking on their health, although some admitted challenges with smoking cessation. Seeking clean air, maintaining a healthy weight, and getting sufficient sleep were seen as positive actions for well-being.

“My asthma is always on. It always ran through my genes. But for some reason, I still smoke. And my sisters and my baby mothers and my cousins, they do not like that about me.” (Bunny, age 32).

Theme 4: Encounters with medication

This theme described participants’ motivations and experiences during adherence or non-adherence to medication regimens. Challenges to medication adherence included “complicated” prescription regimens, uncomfortable side effects, and denial of an HIV diagnosis.

3.1.3.2.2.12 Benefits and effectiveness

Participants adhered to medications when they saw them as a path back to a “normal life.” Preventive medications were viewed as powerful in that a regular regimen of just a single medication could prevent “drastic” health effects for HIV or other chronic conditions. Although adhering to a strict schedule was sometimes challenging, they had positive thoughts about staying on the medication. Participants acknowledged that medication development and access had improved over time.

“Nobody dies in this day with HIV. It’s one medication.” (Atlantic, age 47).

“I take even aspirins every day to prevent a stroke… I feel deep down in my heart that I’m not going stop ever taking aspirins. And I even tell my mother. She’s almost 80 years old. Take an aspirin every day. Because just with one little small pill could just prevent something so drastic. But today, honestly, I can say it’s just going just fine. Because now, those combinations of two and three pills just in one medication.” (Cindy, age 55).

3.1.3.2.2.13 Side effects and concerns

Participants described non-adherence due to deleterious side effects of medications that caused somatic symptoms such as diarrhea, acid reflux, and weight gain or resulted in psychological symptoms such as depression. The need to take several medications could also contribute to depression.
When taking multiple medications at once or having a comorbid condition, participants found it challenging to determine whether discomfort stemmed from a chronic condition or the medication itself.

“I feel super-blessed, super-blessed because I do not have to take so many pills and have different mood swings on the behalf of my medicines. One day, I was getting nauseous. Some days, I felt like I have diarrhea. And sometimes, I did not have an appetite. There was weight loss. It was very discomfort.” (Cindy, age 55).

“… sometimes I feel like a little depression, because you know I need to take this medicine every day for all of my life.” (Jesus, age 54).

“Sometimes I think I have some side effects from the medications, and like I have high blood pressure too, so that can be like, you know, some stuff that I can never really figure out like what’s causing what.” (Jay, age 30).

Theme 5: Social encounters with in-groups and out-groups

This theme focused on participants’ interactions and relationships with both their peers from the community (i.e., people living with HIV who belong to sexually, racially, and ethnically minoritized groups) and individuals outside of it. Some interviewees described themselves as “a people person,” while others were more introverted. Peer relationships were usually positive, whereas interactions with out-group members ranged from healing to stigmatizing.

3.1.3.2.2.14 Peer interactions

Participants expressed that “meeting more people with this condition” helped them “a lot.” They also took on roles to educate and “advocate” for peers, helping them learn about HIV, chronic condition prevention (e.g., cancer), and “new information” in health.

“Because even someone that actually was confessing to me that; how do you get this? And I explained it to them. And I like to advocate for my fellow peers, and even for myself.” (Cindy, age 55).

3.1.3.2.2.15 Interactions with others outside the community

This subtheme explored how participants navigated social interactions outside of their community. They spoke about chronic health conditions with family members or sought information from live resources. Participants noted that interactions with those outside their community could stigmatize sexual minority men with HIV due to a lack of knowledge among the general population. Some suggested that this could be resolved through greater educational outreach about HIV.

“I’m a people person. Like if I was wanted like hardcore information and stuff, I’d be more comfortable in going to like my doctor, or like a community health center or something if they had like groups or something. Like I like to see people and hear about people’s experiences and the exceptional things, like what real people are like.” (Xander, age 33).

“Well with me, there’s a lot of stigma still. And this is 2022. And there’s still a stigma with HIV. In this time, people that do not inform themselves and people that are ignorant in the behalf that they try to push you to the side…A lot of my friends and fellow peers have been rejected with their family, giving them paper plates and disposable utensils, because they are family do not get informed about HIV.” (Cindy, age 55).

“Even on the commercials, what he sees is targeting the gay community…not just the gay community will have HIV…even the commercials sometimes stigmatizes people, because that is the connection. Everything pink. Pink, pink, pink. Even the cookies. So, it’s stereotyping.” (Alberto, age 62).
While some participants were open about their HIV diagnosis such that “everybody” knew, others chose not to disclose their HIV status to co-workers, friends, and family due to stigma and negative judgment. They would “pretend” not to “have anything” to maintain a “normal” facade.

“Not everybody in my circle knows because I think this is something you need to be very careful who you tell it to because of the stigma. Not because I think there’s something wrong with it per se.” (Xavier, age 39).

Theme 6: Desired delivery of health education

This theme focused on the health information that participants expected to obtain and the desired approach to delivering health education in a technology-based behavioral intervention. The desired topics of health information were divided into two subthemes: (1) treatment and medication and (2) preventive and general health information. Preferred approaches, including tone, atmosphere, and methodological aspects of health education, were explored in the subtheme of health information delivery quality.

3.1.3.2.2.16 Treatment and medication

Participants indicated that they wanted to learn more about HIV and current comorbid health problems, such as high blood pressure. They were particularly interested in symptom control, self-management strategies, and medication. They emphasized the importance of including updated information in the intervention (e.g., vaccination for monkeypox) and expressed a desire to obtain information on newly developed HIV treatments and medications.

“… how to control all of the symptoms that I have, with getting through the medications.” (BMW, age 63).

“I want to know more information, new information that you are coming out with. That’s why I want to learn more, because it’s always good to learn.” (Rob, age 30).

3.1.3.2.2.17 Preventive and general health information

Participants expressed a desire to learn more about “preventive measures” and “what [they] can do to be better” in health, such as exercising, healthy eating, and even handling emergencies (e.g., layperson cardiopulmonary resuscitation). They wanted to know “how to avoid” the negative consequences of their health behaviors. A “decision tree” was suggested as a method to illustrate the outcomes of their actions. In addition to HIV, they were interested in learning about other conditions, including their risks, symptoms, treatability, and the types of health professionals who could serve as resources, even if these conditions were not of immediate concern.

“I’m always trying to learn even stuff that I do not have. I do not have diabetes. I do not have high blood pressure. I do not have cancer. I do not have venereal diseases. I do not have hepatitis C. But I try to inform myself.” (Cindy, age 55).

3.1.3.2.2.18 Health information delivery quality

This subtheme examined the specific strategies and quality of health information delivery that participants desired. Participants emphasized the need for comprehensive health information, referring to it as “different stuff,” “every aspect,” and “a little bit of everything.” They also mentioned that health education should be “quick and informative,” as a “long drawn out” format caused participants to “tune out” or “check out.” Educational materials in the intervention should be simple, use easy-to-understand terminology, and include examples (e.g., how a plate should look for a healthy diet).
Additionally, they expressed a preference for a positive tone, noting that pervasive negative health-related news can discourage community members. Participants highlighted the importance of reliable, well-structured sources of information, and they favored learning from a health educator who would lead group health education sessions. They envisioned the health educator as a “leader” or “navigator” who could “start a conversation” and “steer them in the right direction” during the sessions. They expected the health educator to be a “licensed” medical provider who “knows more than [they] do.”

“In my opinion, it should not be very scientific. You know this high, scientific words, you know something simple that everybody could understand.” (Jaime, age 61).

“… learning more about different kinds of people, like the medical people who know about it, to teach us more information about it. That would be perfect.” (Rob, age 30).

Theme 7: Comfort in using technology and accessibility

This theme focused on participants’ perceptions of using technology and their comfort levels with it. It also explored the factors influencing their access to technology. All but two participants indicated that they were generally comfortable using technology. They described technology as “standard” these days and effective for information dissemination. Additionally, they noted that technology has been “a big help” and a “very effective way to connect with people.”

“I’m very comfortable with technology. I love it, actually. And, I’m very comfortable making friends with people over the Internet.” (Jesus, age 49).

However, comfort levels varied depending on the medium used, such as preferences for text messaging, specific social media platforms, or gaming. Factors influencing comfort levels also included technical accessibility and cultural acceptability. Age was largely cited as a determinant of technical accessibility. Older adult participants were often “not tech-savvy,” or were perceived as such by younger participants, and preferred “face-to-face” communication. In contrast, younger individuals were perceived to favor “quick” online interactions or gaming. Cultural factors also played a role in accessibility, with participants mentioning that technology use can vary by race and ethnicity. Two participants expressed that they were not comfortable with technology at all due to old age, long periods of incarceration, and not having a computer at home. Nevertheless, one of these participants showed a willingness to learn and use technology.

(Translator response) “Not a comfort. That does not apply to him. He does not have a computer at home; he’s not tech savvy. And only because he’s an elderly person, …” (Eddy, age 65).

“Oh well, that’s easy. Technology has been a big help. At first, I was ‘iffy’ about it because I’m really old school. I was raised by a mother that was straight up Puerto Rican from the hills of the island. But, technology kind of grows on you if you allow it to. So, in the past couple of years, I’ve been able to actually meeting in person some Facebook friends locally in the area, and you know, so I’ve made some really good friendships through technology, yes, through the Internet, and they seem to be going very well…The only barrier that I would say to something like that would be, there are a lot of people in my community, in the black and brown community, that aren’t very tech savvy. So, they really would not know how to maneuver and you know… So, I think maybe…I do not know.
It’s something that is a problem, and yes…” (Cheo, age 55).

To increase accessibility, participants emphasized ease of use and the need for training before using general technology or specific technology-based modalities (e.g., navigating gaming interfaces). Providing “how-to videos” was suggested as a potential method to facilitate learning.

(Translator response) “He says he would not mind, but he needs to be trained, so he’s not comfortable in doing it because he does not know how to do it. But, if someone trains me, then I would be more comfortable in doing it.” (Roseman, age 62).

Theme 8: Ways to nurture engagement in technology-based behavioral interventions

This theme centered on the characteristics and activities interviewees desired to see in a virtual community space for health education that would encourage their active and sustained participation.

3.1.3.2.2.19 Interaction with peers

Participants desired to meet other community members through interactions that mirrored ones in real life, such as support groups and health education conversations that would be “interactive and mutual.” Additionally, participants suggested that community members could intentionally “meet new people” and “socialize” with one another by including a general profile of interests and the ability to guide other players within the virtual environment to retrieve information. With regard to introverted individuals, some interviewees were unsure of their willingness to participate, while others thought that the space would help those “not ready to come out to the world” to “connect with others and let them know that they are not alone.”

“And they actually did not take their medication for a long time because of being in denial. But when they realize they are not alone in a video game that they can be playing by themselves at their house, it connects them with this universe of people that are feeling the same way they are. It could be helpful to them.” (Xavier, age 39).

3.1.3.2.2.20 Fun

Participants prioritized “fun” and “games” when probed about desired activities in a virtual environment that would motivate community members to take up health information. Participants emphasized that “medical” and “learning” material could be woven into non-educational activities and should use attention-grabbing words, not boring jargon, for laypersons. Competition in a gamified setting was highlighted as a common motivator to engage and retain user participation. Some participants wanted action-oriented, “violent” activities such as “killing” or “attacking” antagonists, such as “bad guys” or “heart disease,” that represented the health conditions users would be trying to prevent or overcome. “Special guests,” such as drag queens, would “grab someone’s attention” and keep them “tuned in” over time.

“This is a game so you have to keep it fun. Do not make it too…you are in school, you are doing your work and the teacher asks a question and everybody is raising their hands to see who can answer the quickest. You get home and it’s time to do homework and you do not even want to sit down and do it. You have to keep them interested; keep people…it’s not just medical, you can also put fun, regular things in here…quiz them on cars or capitals of states…small things…and get their attention. As long as you keep it fun, I feel like the healthy part can just be mixed in there, blended all in.” (Success, age 41 and James, age 35).

“A Monopoly game where the correct answer, throw the dice. It has to be competitive.
Like I have to compete against somebody. I’m thinking about part of the game could be somewhere where people can talk. And then the rules in this house or in this club could be the games. So, I would invite people like; hey, nice to meet you. I’m Xavier. Let us talk a little bit. Hey, you like this. You like that. You know what? I challenge you to this game. So, we both get into that section on the club and start competing.” (Xavier, age 39).

3.1.3.2.2.21 Innovation

Participants expressed interest in the use of avatars due to their technological novelty and customizability. They noted that avatars would “grab” their attention in a virtual space, and the virtual environment itself evoked interest since participants “did not have that in the past” as a way to deliver information.

“… the avatar is also good as well because a lot of the kids right now, that’s the way of what they are doing, so they can change their faces and so forth.” (Peter, age 46).

3.1.3.2.2.22 Diversity and inclusion

This subtheme included participants’ views on the current limits and desired inclusion of various languages, cultures, and ages in the behavioral interventions. They emphasized multilingual content to prevent “language barriers” and ensure that participants “understand what they are seeing.” They also desired the inclusion of “Hispanic” and “Afro” cultures, such as through the use of culturally familiar foods in diet education, so that they could more easily relate to the information given. One participant strongly emphasized the unmet need for a support space for community members over the age of 40 years, given the lack of such spaces for this age group. Participants noted that depictions of avatars and characters within a virtual space should be “broad” and represent a wide spectrum of gender identities, body types, and clothing preferences.

“… for example, if you are talking about, what is good to eat, in order to have a healthy life? If you tell me, ok, do not eat rice and gondolas, do not eat plantains, I know that plantains and [gandules] identify Latino people, in my opinion, identifies myself. But if you tell me, oh, it’s better for you to eat broccoli and dah, dah, dah, I say, oh, that is not Latino. Even though I know it is healthy to eat broccoli, but it’s not close to Me.” (Jaime, age 61).

“That’s one of my big issues. And I’m being totally honest about that. Any group support, anything; oh, you have to be under 40. You have to be between 18 and 35. And I always say; what about people over 40? We still have HIV. We still have problems.” (Atlantic, age 47).

“But when I say “make it broad” like really open, I’m talking about all types of things; gender, also clothing, also… Because those are expressions.” (Xavier, age 39).

3.1.3.2.2.23 Trivia

This subtheme described participants’ interest in the use of trivia-like games as a feature to facilitate health information uptake. They suggested that the implementation of “true-or-false multiple choice” and trivia games in general would encourage users to learn about health “conditions.” Trivia would also deepen community members’ knowledge about their own conditions when they were unable to obtain that knowledge from other information sources.

“But they are supposed to have a trivia like that. Like okay, I have cancer. I have liver problems. It’s connecting with my HIV or whatever ailment you have. And they give you the information where you can go. And they tell you where you can go or who you can call, but that’s it.” (Atlantic, age 47).
3.1.3.2.2.24 Visualization and posting

Participants suggested several communication formats for delivering health education effectively to community members. They desired spoken and “visual” content, such as videos and diagrams, rather than written content alone, to capture users’ attention. Brief video “series” were thought to retain attention over time. Posting “billboards,” “closed captioning,” and occasional “PSAs” (i.e., public service announcements) was also suggested to deliver health information in an obvious manner without disrupting the experience of navigating a virtual environment.

(Translator response) “He says, one, you can do videos, and you can also give health messages on how medication improves health conditions. And also, you can post them throughout like, let us say, billboards, or commercials, stuff like that.” (Roseman, age 62).

“I think informational links would be like diagrams and stuff, because everything is visual right now. People aren’t going to sit there and want to read a whole bunch of, you know, stuff, because everything now is like, you know, even with social media, it’s flip, you know, flip, flip, flip. So you know, even like a three-minute video with something, you know, more like a series. Like, one day you watch a video, then the next day you watch another video that is like five-minutes long. So, that keeps people’s attention where you give them like a cliff hanger at the end so that way they will want to watch the next video.” (Jay, age 30).

3.1.3.2.2.25 User-specific engagement preferences

While most participants suggested their preferred approaches, some acknowledged that engagement depends on each user’s personal interests or preferences, regardless of their ability to use the technology or its intriguing features. In other words, what is available or useful to one person might not be to another.

“It depends on the person and how frequently they are on the app as well.” (Mr. Jean Pierre, age 50).

“Now if you say I’ve got the most potential and you feel like I’m qualified to play for the NBA, does not mean that I want to play for the NBA. Okay?” (Bunny, age 32).

One participant expressed a dislike of “meeting people he does not know,” even though he was comfortable with using technology and interested in behavioral health education. A few others responded that they were “not going to actually use” the program due to concerns about security and a lack of interest in the gaming format. In contrast, another view was that people end up adopting technology as a necessary tool of current trends, despite personal dislike or potential adverse effects.

“At the end of the day, this would be a tool. … It’s like a car. A car is a tool for you to move. But if you use it wrong, you can kill somebody. So at the end of the day, people need to understand that. These are tools that you are going to use, and you decide how to use them.” (Xavier, age 39).

Theme 9: Nurturing a safe space among users in technology-based behavioral interventions

Participants emphasized the need for technology-based environments to feel like safe spaces where they could choose how much personal information to share, including the option to stay anonymous. Personal privacy preferences were influenced by distrust of digital interactions due to bad actors.

3.1.3.2.2.26 Privacy in virtual environments

Participants understood that privacy was valued differently among community members and that personal preferences for privacy could change over time.
While some individuals were “open” and “comfortable” sharing their HIV status and “real name,” they still supported others’ needs to remain anonymous and use avatars until ready to share more about themselves in a virtual environment.

“Well, of course privacy is very important. But, I think that if I know the decision should be made by the player. So if the player wants to use his real picture, for example, that’s ok. But if the player prefers to have an avatar, that should be ok too.” (Jaime, age 61).

3.1.3.2.2.27 Distrust and safety concerns

This subtheme explored various concerns that participants held while using online technology. They understood that individuals they met online may be “shallow” and not forthcoming with their true identity, and thus expressed caution in meeting with such individuals in real life. Another concern was the potential of a closed virtual space to be infiltrated by bad actors who did not identify as community members and who may “prey on people.” Tracking information such as cookies and unrequested follow-up messages discouraged participants from logging onto certain online websites and applications.

“Mean for the same reason. If someone shows themselves like this person and they sustain that, and then I’m interested in meeting that person, and it comes to be that that person is not what they described. I’m describing first what can go wrong. Hmm. And even worse things could happen. Like let us meet somewhere. Of course, you need to be really careful in these types of situations. It’s a very well-known rule, even with games, technology, and apps, that you can see the person, and you are not going to meet their person in their apartment.” (Xavier, age 39).

“People can go online just to meet people, like even though it would be something that is around something positive, there are always those people who will try to like prey on people like that. And like somebody might join it and say yes, I’m a party of the community, and you know, learn all this information, get all the facts, just to like find somebody that they can connect and do some real craziness. Like no, maybe they are a killer, I do not know. I do not play those games” (Xander, age 33).

“Privacy I think it’s the main, main, #1 thing. You have to have an app with privacy. I go here. But I know when I’m finished and I close that app or whatever name is that app, they are not going to be popping up in my emails as SPAM, or whatever you call it in emails, or in my Facebook or my Twitter or whatever. I know they are not being connected.” (Atlantic, age 47).
Participant characteristics Among the 30 community members who participated in this study, the mean age was 47.5 years (SD = 12.5), and the mean duration since HIV diagnosis was 17.2 years (SD = 11.1, range 1–41). All participants ( N = 30) reported having health insurance and access to care, with 97% ( n = 29) having a regular provider and being on antiretroviral therapy. Participants reported being out of the closet for an average of 25.7 years (SD = 14.4). The majority of participants preferred the gender pronouns “he/him” (97%, n = 29), while one participant (3%) preferred “she/her.” For race and ethnicity, we documented their responses verbatim as participants identified themselves, adhering to the gold standard of self-identification for reporting these demographics ( ). Regarding ethnic background, 70% of participants ( n = 21) self-identified as Latinx. While Latinx ethnicity refers to having heritage from Latin America and the Caribbean, regardless of race, Haitian participants in our study did not self-identify as Latinx but Black, despite Haiti being part of Latin America. This distinction may be associated with Haiti’s unique history and culture, which are rooted in African descent, and its primary language, Haitian Creole. Further demographic information is presented in .
Thematic analysis Using inductive coding scheme, we identified nine major themes: (1) perceptions of health, (2) current and anticipated health concerns, (3) behaviors and regimens that improve health and well-being, (4) encounters with medication, (5) social encounters with in-groups and out-groups, (6) desired delivery of health education, (7) comfort in using technology and accessibility, (8) ways to nurture engagement, and (9) nurturing a safe space among users in technology-based behavioral interventions in Black and Latinx sexual minority men with HIV. Cohen’s Kappa coefficient indicated perfect intercoder agreement ( κ = 0.95), based on Viera and Garrett’s criteria for interpreting the kappa statistic ( ). The themes and subthemes are described below, along with supporting quotes. Pseudonyms were used to safeguard the identities of participants. If a participant is referred to as “he/his/him” in the quotes below, it indicates that a translator has conveyed the participants’ words into English. Theme 1: Perceptions of health- This theme focused on the overall perception of health in living with HIV. This included describing one’s health status, control over health, and perceptions of aging as subthemes. 3.1.3.2.2.1 Describing one’s health status The interviewees were asked to rate their current health. Responses ranged from unhealthy/negative through average/neutral to healthy/positive. Participants who perceived themselves as healthy described their health as “very well,” “fine,” “pretty good,” “strong and solid,” “perfect,” “super-blessed,” “completely cool,” or “free,” with some rating their health status numerically (e.g., 10 out of 10). Factors associated with positive health perceptions included regular “medical checkups,” receiving treatment and medication, not “getting sick,” not having “too many health conditions” or “any pain,” and disclosing their condition. They felt healthy when they could “work,” “be able,” and live a “normal” life, such as “going out to do [one’s] errands,” “traveling,” or “just with a little extra precaution.” Some participants evaluated their health positively when their conditions improved compared to their baseline condition or when test results, such as CD4 cell count, showed improvement. “I have already the treatment. I also I’m open about my condition with my friends. I do not have nothing right now that is bothering me like that. I have a good doctor. So I feel that my life is good right now, and I feel healthy.” (Ellie, age 48). In the average/neutral category, participants described their health status as “regular,” “fair,” “average,” “up to par,” “50–50,” and “in the middle.” Underlying conditions such as HIV and other comorbidities, uncertainty about the causes of their illness and symptoms, and the burden of taking multiple medications and dealing with their side effects prevented them from perceiving themselves as fully healthy. “Well, in relation to my HIV, I believe it’s really good. I mean everything is under control. But I have underlying conditions, which cause distraction in my health, so that’s why I rated myself fair.” (Cheo, age 55). Participants who perceived their health status as negative described managing their health as “stressful,” “very hard,” “very difficult,” and “not easy” due to HIV and comorbidities, along with a lack of “possibilities” or availability of treatment and medications. 
They mentioned coping mechanisms such as “denial,” ignorance, “crying,” and being “isolated” in reaction to their HIV diagnosis and reported feeling lonely, irritable, cranky, tired, depressed, and afraid.

“Some days, I wake up being depressed. It has not been easy.” (Yoga, age 65).

“Because you know I have this problem with high [blood] pressure … and sometimes that I can feel a little bad for that.” (Jesus, age 54).

Control over health

This subtheme explored participants’ perceptions of how they could control their own health. Participants mentioned they could “control their own body” and “illness.” They also mentioned that their “lifestyle choices” were responsible for their health status and that it was “up to” themselves to “make well-informed decisions.” They perceived the importance of “making changes” and “taking care of [themselves]” to “manage” and “improve” their health.

“The high blood pressure, I do believe that some like of my lifestyle choices I think is what led me to developing it. So, it is important that I kind of like have been able to manage it with like medicine and stuff.” (Xander, age 33).

“Your energy, your strength, and your mentality controls your illness in your body.” (Bunny, age 32).

“I always say; I believe HIV lives with me. I have control of what I eat, what I do to take care of myself.” (Manuel, age 62).

Perceptions of aging

Regarding aging, participants acknowledged physiological decline and reduced functionality. They mentioned experiencing or anticipating health problems they were not overly concerned about, noting that their bodies were “not like when [they] were younger.” They also discussed reduced physical activity, metabolism, and social life. Specific concerns associated with aging included physical illnesses and disabilities, such as “stiff joints” and “walking with a cane,” as well as mental issues such as “loss of memory” or Alzheimer’s disease. Despite these concerns, participants expressed a promising outlook on longevity while living with HIV. They believed they could still engage in health-promoting activities as they aged, such as exercising at an appropriate intensity rather than at a “vigorous” level and finding a balance between alone time and socializing.

“Because once you grow up, you can get sick. And your health is not the same. Your body’s not the same. Your body changes.” (Atlantic, age 47).

(Translator response) “But you know, when you have age and your elderly, you cannot do it as much.” (Roseman, age 62).

Theme 2: Current and anticipated health concerns

This theme explored the health concerns that participants were experiencing and those they worried about facing in the future. Participants expressed significant concerns about chronic, long-term health conditions. When discussing the potential sources of these concerns, they frequently referenced heredity, family medical history, and observations within their community.

Current health concerns

While participants reported a variety of current health concerns, they largely expressed significant worries about chronic cardiovascular disease (CVD) and related conditions, including diabetes, high blood pressure, high cholesterol, and heart disease.
Other chronic conditions mentioned included gastrointestinal issues (e.g., cirrhosis, stomach ulcers), neurological conditions (e.g., seizure disorder), pulmonary diseases (e.g., breathing problems, asthma), auditory concerns (e.g., chronic tinnitus), and conditions possibly related to chronic inflammation (e.g., joint pain, carpal tunnel syndrome, plantar fasciitis). Participants also expressed concern about mental health conditions, such as post-traumatic stress disorder, depression, and anxiety, which they perceived as being associated with their HIV diagnosis and medication. Beyond chronic diseases, participants reported lifestyle-related health concerns such as being overweight and sleep problems (e.g., difficulty falling asleep, obesity-induced sleep apnea). Infectious diseases, including influenza and SARS-CoV-2 infection (COVID-19), were also mentioned. Participants described these conditions as “cumbersome,” noting that they interfered with leading a normal life, including regular activities and diet. Managing these conditions often required significant lifestyle changes to meet medical recommendations and guidelines. While some participants acknowledged that their “lifestyle choices led [them] to developing” these chronic conditions, others expressed uncertainty about “what’s causing what.”

“I feel like a little depression, because you know I need to take this medicine every day for all of my life.” (Jesus, age 54).

“Well, my main concern is diabetes, to be honest with you. It’s one of the most challenging things that I’ve ever had to go through. It puts everything else on the backburner as far as my focus, which is on diabetes type 2. It’s really difficult to manage. You have to make drastic live-changes [sic] and diet changes.” (Cheo, age 55).

Some participants reported having no current health concerns when their HIV-related symptoms were well controlled with medication, they had no chronic conditions or other illnesses, and their vital signs and laboratory results (e.g., blood pressure, CD4 cell counts) were well managed. They perceived themselves as free of major issues, feeling empowered to “make well-informed decisions” about their health.

Future health concerns

Participants reported a range of anticipated health concerns, even though they did not exhibit related symptoms at the time. High blood pressure, diabetes, and heart attacks were highlighted as “really big problems.” Participants had observed immediate family members (e.g., grandparents, parents), relatives (e.g., aunts), and friends suffering from these conditions and had experienced losses as a result. They expressed concern about potential complications, such as diabetes-related blindness, limb loss, and limited mobility. Heart attacks were perceived as particularly serious and as conditions that could unexpectedly affect people, even young individuals in their 30s. Stroke was identified as a common health concern among transgender individuals due to the risk of blood clots as a side effect of hormonal therapy. Cancer, particularly colon cancer, was noted as a higher risk for racially and ethnically minoritized groups. Participants also worried about the exacerbation of symptoms (e.g., worsening tinnitus leading to deafness) and the sudden onset of underlying conditions (e.g., seizures), even if these were currently controlled.
Additionally, there was a fear of death related to HIV and concerns about mental health issues and age-related conditions, such as memory loss, Alzheimer’s disease, stiff joints, and resulting disability. Managing these potential health issues was seen as requiring “extra effort in addition to just living with HIV and AIDS,” prompting participants to seek regular screenings and medical consultations with healthcare providers.

“My grandmother is actually blind in one eye now due to diabetes. I’ve had some of my aunts lose limbs. … That stuff can get really serious. Diabetes is serious. People do not take it serious. It really is a serious disease. It’s more serious than they take it, to me.” (James, age 35).

Theme 3: Behaviors and regimens that improve health and well-being

This theme explored the health maintenance activities that interviewees participated in, or wished to adopt, to maintain and improve their well-being. These encompassed physical activity, a healthy diet, medical interventions and health education, mental health support, social support, and various other activities.

Promoting physical activity

When prompted to think about their physical activity, interviewees recalled activities such as “exercise,” “going to the gym more,” “walking a lot,” and “aerobic or cardio.” Physical activity levels varied with age and comorbid health conditions. Physical activity was bolstered by participating alongside peers or by incorporating it into daily routines, including commutes, grocery shopping, and watching television.

“I walk a lot. I try to, if I can walk, I try not to take a bus or a train if it’s within a good walking distance about half the time. Also, I do other stuff like I kayak off the Hudson and stuff like that.” (Jay, age 30).

“I walk a lot and…walk with some friends or some person; I feel ready and excited, good. And when I go to the gym, I find some person I know that I can do…when I go, sincerely, when I go to the gym, I’m doing more cardio, walking or cycling, that and other activities.” (Pedro, age 41).

Dietary changes and conscious eating habits

Regarding diet, participants recounted the conscious changes they had made to improve their health. Common techniques included replacing sugar-sweetened beverages with water and limiting consumption of unhealthy, high-carbohydrate foods to “sometimes” or “one day per month.” Participants mentioned seeking information about nutrition from experts, peers, and media channels such as “the cooking channel.” Additionally, some participants mentioned how their cultural background influenced their dietary decisions.

“Before, I used to not care. And I’d eat a lot of fried stuff, and a lot of rice and pasta and all that stuff. But now everything is moderate with me.” (Cindy, age 55).

“If [my doctors] say to drink a lot of water, I drink a lot of water. If they say eat healthy, I’m trying to eat healthy. I eat chicken breasts, salmon, white rice, quinoa, vegetables.” (BMW, age 63).

(Translator response) “In the Haitian culture it’s a lot more home cooked meals than outside food. Like McDonald’s is considered junk food. McDonald’s is not…yes, we do not eat McDonald’s like that. We like home cooked meals – rice, beans, plants, and salads.” (Eddy, age 65).
Medical interventions and learning about health

This subtheme explored the ways in which community members sought to manage their health and gain information about current medications and clinical treatments to “live with HIV” and comorbid conditions. They regularly met with doctors “to get [their] heart check on,” for “colon screens,” and “to follow all the things my doctor orders.” Participants consulted various sources, including professionals, such as nutritionists and therapists, and online videos. However, they expressed a specific desire to learn health information from medical providers. Learning about their HIV diagnosis and how to cope with “the virus” was described as “calming” and allowed them to feel “much better.” Participants also used preventive measures, such as vaccinations and aspirin, to protect against future illness and proactively sought out information about diseases they could potentially encounter in the future.

(Translator response) “He said the best answer is that you take your medication on time and you do whatever that is prescribed, like as your doctors recommend.” (Eddy, age 65).

“It’s like they are coming out with different medications for HIV. They came out with Descovy. They came out with so many of them. So what I do is I, sometimes, I do my research. And I look online, YouTube or videos. I really find out certain information about it. Like for me to really hear somebody, like a medical provider who knows more than we do, that would be perfect, too.” (Rob, age 30).

Mental health support

Participants navigated concerns about mental health using various techniques. Stress from the HIV diagnosis and other life circumstances manifested through stress eating, panic attacks, and depression. Participants lessened their mental burden by “socializing and connecting” with peers and family who shared similar health experiences. Outside of these interpersonal relationships, they also practiced meditation, scheduled “quiet time,” and attended therapy. In pursuit of a more relaxed lifestyle, participants also reframed their thoughts, such as keeping their minds focused “on other things” and “not paying to attention to things that cannot affect me.”

“So one of the things that has helped me all my life with whatever, you name it, depression or this condition or whatever, is socializing and connecting to other people that are in the same position as I am.” (Xavier, age 39).

“It’s always good to talk about it. The more you hold it in, the more you feel like I’m not comfortable, I do not want to express what I have. The best option I have is express your thoughts about it. Do not hold it in.” (Rob, age 30).

Social support

This subtheme explored how participants leveraged their social relationships to enhance their health. Numerous participants “relied” on friends, peers living with HIV, and “positive people” to motivate their health journey in areas such as physical activity and mental wellness. These supporters offered encouraging advice such as “take 1 day at a time” and “just stay on that right path.” Their straightforward activity guidance, such as “Do the exercise. Drink a lot of water. Walk for 30 min every day,” was also beneficial in helping participants maintain their health regimens.

“… a support group was beneficial for me. And meeting more people living with this condition helped me a lot.” (Xavier, age 39).
“As you do all these activities and all these actions, it makes your whole body feel better, makes you do more activities with my friends and with other people, other good role models who are there, who support me” (Rob, age 30).

Miscellaneous health and wellness practices

Participants also shared other health and wellness activities they performed. They understood the detrimental impact of alcohol consumption and smoking on their health, although some admitted challenges with smoking cessation. Seeking clean air, maintaining a healthy weight, and getting sufficient sleep were seen as positive actions for well-being.

“My asthma is always on. It always ran through my genes. But for some reason, I still smoke. And my sisters and my baby mothers and my cousins, they do not like that about me.” (Bunny, age 32).

Theme 4: Encounters with medication

This theme described participants’ motivations and experiences during adherence or non-adherence to medication regimens. Challenges to medication adherence included “complicated” prescription regimens, uncomfortable side effects, and denial of the HIV diagnosis.

Benefits and effectiveness

Participants adhered to medications when they saw them as a path back to a “normal life.” Preventive medications were viewed as powerful in that a regular regimen of just a single medication could prevent “drastic” health effects for HIV or other chronic conditions. Although adhering to a strict schedule was sometimes challenging, participants had positive thoughts about staying on their medication. They acknowledged that medication development and access had improved over time.

“Nobody dies in this day with HIV. It’s one medication.” (Atlantic, age 47).

“I take even aspirins every day to prevent a stroke… I feel deep down in my heart that I’m not going stop ever taking aspirins. And I even tell my mother. She’s almost 80 years old. Take an aspirin every day. Because just with one little small pill could just prevent something so drastic. But today, honestly, I can say it’s just going just fine. Because now, those combinations of two and three pills just in one medication.” (Cindy, age 55).

Side effects and concerns

Participants described non-adherence due to deleterious side effects of medications that caused somatic symptoms such as diarrhea, acid reflux, and weight gain or resulted in psychological symptoms such as depression. The need to take several medications could also contribute to depression. When taking multiple medications at once or having a comorbid condition, participants found it challenging to determine whether discomfort stemmed from a chronic condition or from the medication itself.

“I feel super-blessed, super-blessed because I do not have to take so many pills and have different mood swings on the behalf of my medicines. One day, I was getting nauseous. Some days, I felt like I have diarrhea. And sometimes, I did not have an appetite. There was weight loss. It was very discomfort.” (Cindy, age 55).

“… sometimes I feel like a little depression, because you know I need to take this medicine every day for all of my life.” (Jesus, age 54).

“Sometimes I think I have some side effects from the medications, and like I have high blood pressure too, so that can be like, you know, some stuff that I can never really figure out like what’s causing what.” (Jay, age 30).
Theme 5: Social encounters with in-groups and out-groups

This theme focused on participants’ interactions and relationships with both their peers from the community (i.e., living with HIV and having sexually, racially, and ethnically minoritized backgrounds) and individuals outside of it. Some interviewees described themselves as “a people person,” while others were more introverted. Peer relationships were usually positive, whereas interactions with out-group members ranged from healing to stigmatizing.

Peer interactions

Participants expressed that “meeting more people with this condition” helped them “a lot.” They also took on roles to educate and “advocate” for peers, helping them learn about HIV, chronic condition prevention (e.g., cancer), and “new information” in health.

“Because even someone that actually was confessing to me that; how do you get this? And I explained it to them. And I like to advocate for my fellow peers, and even for myself.” (Cindy, age 55).

Interactions with others outside the community

This subtheme explored how participants navigated social interactions outside of their community. They spoke about chronic health conditions with family members or sought information from live resources. Participants noted that interactions with those outside their community could stigmatize sexual minority men with HIV due to a lack of knowledge among the general population. Some suggested that this could be addressed through greater educational outreach about HIV.

“I’m a people person. Like if I was wanted like hardcore information and stuff, I’d be more comfortable in going to like my doctor, or like a community health center or something if they had like groups or something. Like I like to see people and hear about people’s experiences and the exceptional things, like what real people are like.” (Xander, age 33).

“Well with me, there’s a lot of stigma still. And this is 2022. And there’s still a stigma with HIV. In this time, people that do not inform themselves and people that are ignorant in the behalf that they try to push you to the side…A lot of my friends and fellow peers have been rejected with their family, giving them paper plates and disposable utensils, because they are family do not get informed about HIV.” (Cindy, age 55).

“Even on the commercials, what he sees is targeting the gay community…not just the gay community will have HIV…even the commercials sometimes stigmatizes people, because that is the connection. Everything pink. Pink, pink, pink. Even the cookies. So, it’s stereotyping.” (Alberto, age 62).

While some participants were open about their HIV diagnosis such that “everybody” knew, others chose not to disclose their HIV status to co-workers, friends, and family due to stigma and negative judgment. They would “pretend” not to “have anything” to maintain a “normal” facade.

“Not everybody in my circle knows because I think this is something you need to be very careful who you tell it to because of the stigma. Not because I think there’s something wrong with it per se.” (Xavier, age 39).

Theme 6: Desired delivery of health education

This theme focused on the health information that participants expected to obtain and their desired approach to the delivery of health education in a technology-based behavioral intervention. The desired topics of health information were divided into two subthemes: (1) treatment and medication and (2) preventive and general health information.
Preferred approaches, including the tone, atmosphere, and methodological aspects of health education, were explored in the subtheme of health information delivery quality.

Treatment and medication

Participants indicated that they wanted to learn more about HIV and current comorbid health problems, such as high blood pressure. They were particularly interested in symptom control, self-management strategies, and medication. They emphasized the importance of including updated information in the intervention (e.g., vaccination for monkeypox) and expressed a desire to obtain information on newly discovered, up-to-date HIV treatments and medications.

“… how to control all of the symptoms that I have, with getting through the medications.” (BMW, age 63).

“I want to know more information, new information that you are coming out with. That’s why I want to learn more, because it’s always good to learn.” (Rob, age 30).

Preventive and general health information

Participants expressed a desire to learn more about “preventive measures” and “what [they] can do to be better” in health, such as exercising, healthy eating, and even handling emergencies (e.g., layperson cardiopulmonary resuscitation). They wanted to know “how to avoid” the negative consequences of their health behaviors. A “decision tree” was suggested as a method to illustrate the outcomes of their actions. In addition to HIV, they were interested in learning about other conditions, including their risks, symptoms, treatability, and the types of health professionals who could serve as resources, even if these conditions were not of immediate concern.

“I’m always trying to learn even stuff that I do not have. I do not have diabetes. I do not have high blood pressure. I do not have cancer. I do not have venereal diseases. I do not have hepatitis C. But I try to inform myself.” (Cindy, age 55).

Health information delivery quality

This subtheme examined the specific strategies and quality of health information delivery that participants desired. Participants emphasized the need for comprehensive health information, referring to it as “different stuff,” “every aspect,” and “a little bit of everything.” They also mentioned that health education should be “quick and informative,” as a “long drawn out” format causes participants to “tune out” or “check out.” Educational materials in the intervention should be simple, use easy-to-understand terminology, and include examples (e.g., how a plate should look for a healthy diet). Additionally, they expressed a preference for a positive tone, noting that pervasive negative health-related news can discourage community members. Participants highlighted the importance of reliable, well-structured sources of information, and they favored learning from a health educator who would lead group health education sessions. They envisioned the health educator as a “leader” or “navigator” who could “start a conversation” and “steer them in the right direction” during the sessions. They expected the health educator to be a “licensed” medical provider who “knows more than [they] do.”

“In my opinion, it should not be very scientific. You know this high, scientific words, you know something simple that everybody could understand.” (Jaime, age 61).

“… learning more about different kinds of people, like the medical people who know about it, to teach us more information about it. That would be perfect.” (Rob, age 30).
Theme 7: Comfort in using technology and accessibility

This theme focused on participants’ perceptions of using technology and their comfort levels with it. It also explored the factors influencing their access to technology. All but two participants indicated that they were generally comfortable using technology. They described technology as “standard” these days and effective for information dissemination. Additionally, they noted that technology has been “a big help” and a “very effective way to connect with people.”

“I’m very comfortable with technology. I love it, actually. And, I’m very comfortable making friends with people over the Internet.” (Jesus, age 49).

However, comfort levels varied depending on the medium used, such as preferences for text messaging, specific social media platforms, or gaming. Factors influencing comfort levels also included technical accessibility and cultural acceptability. Age was frequently cited as a determinant of technical accessibility. Older adult participants were often “not tech-savvy,” or were perceived as such by younger participants, and preferred “face-to-face” communication. In contrast, younger individuals were perceived to favor “quick” online interactions or gaming. Cultural factors also played a role in accessibility, with participants mentioning that technology use can vary by race and ethnicity. Two participants expressed that they were not comfortable with technology at all, citing old age, long periods of incarceration, and not having a computer at home. Nevertheless, one of these participants showed a willingness to learn and use technology.

(Translator response) “Not a comfort. That does not apply to him. He does not have a computer at home; he’s not tech savvy. And only because he’s an elderly person, …” (Eddy, age 65).

“Oh well, that’s easy. Technology has been a big help. At first, I was ‘iffy’ about it because I’m really old school. I was raised by a mother that was straight up Puerto Rican from the hills of the island. But, technology kind of grows on you if you allow it to. So, in the past couple of years, I’ve been able to actually meeting in person some Facebook friends locally in the area, and you know, so I’ve made some really good friendships through technology, yes, through the Internet, and they seem to be going very well…The only barrier that I would say to something like that would be, there are a lot of people in my community, in the black and brown community, that aren’t very tech savvy. So, they really would not know how to maneuver and you know… So, I think maybe…I do not know. It’s something that is a problem, and yes…” (Cheo, age 55).

To increase accessibility, participants emphasized ease of use and the need for training before using general technology or specific technology-based modalities (e.g., navigating gaming interfaces). Providing “how-to videos” was suggested as a potential method to facilitate learning.

(Translator response) “He says he would not mind, but he needs to be trained, so he’s not comfortable in doing it because he does not know how to do it. But, if someone trains me, then I would be more comfortable in doing it.” (Roseman, age 62).

Theme 8: Ways to nurture engagement in technology-based behavioral interventions

This theme centered on the characteristics and activities interviewees desired to see in a virtual community space for health education that would encourage their active and sustained participation.
Interaction with peers

Participants desired to meet other community members through interactions that mirrored those in real life, such as support groups and health education conversations that would be “interactive and mutual.” Additionally, participants suggested that community members could intentionally “meet new people” and “socialize” with one another through features such as a general profile of interests and the ability to guide other players within the virtual environment to retrieve information. Regarding introverted individuals, some interviewees were unsure of their willingness to participate, while others thought that the space would help those “not ready to come out to the world” to “connect with others and let them know that they are not alone.”

“And they actually did not take their medication for a long time because of being in denial. But when they realize they are not alone in a video game that they can be playing by themselves at their house, it connects them with this universe of people that are feeling the same way they are. It could be helpful to them.” (Xavier, age 39).

Fun

Participants prioritized “fun” and “games” when probed about desired activities in a virtual environment to motivate community members to take up health information. Participants emphasized that “medical” and “learning” material could be woven into non-educational activities and should use attention-grabbing words rather than jargon that bores laypersons. Competition in a gamified setting was highlighted as a common motivator to engage and retain user participation. Some participants wanted action-oriented, “violent” activities such as “killing” or “attacking” antagonists such as “bad guys” or “heart disease” that represented the health conditions users would be trying to prevent or overcome. “Special guests,” such as drag queens, would “grab someone’s attention” and keep them “tuned in” over time.

“This is a game so you have to keep it fun. Do not make it too…you are in school, you are doing your work and the teacher asks a question and everybody is raising their hands to see who can answer the quickest. You get home and it’s time to do homework and you do not even want to sit down and do it. You have to keep them interested; keep people…it’s not just medical, you can also put fun, regular things in here…quiz them on cars or capitals of states…small things…and get their attention. As long as you keep it fun, I feel like the healthy part can just be mixed in there, blended all in.” (Success, age 41, and James, age 35).

“A Monopoly game where the correct answer, throw the dice. It has to be competitive. Like I have to compete against somebody. I’m thinking about part of the game could be somewhere where people can talk. And then the rules in this house or in this club could be the games. So, I would invite people like; hey, nice to meet you. I’m Xavier. Let us talk a little bit. Hey, you like this. You like that. You know what? I challenge you to this game. So, we both get into that section on the club and start competing.” (Xavier, age 39).

Innovation

Participants expressed interest in the use of avatars due to their technological novelty and customizability. They noted that avatars would “grab” their attention in a virtual space, and the virtual environment itself evoked interest because participants “did not have that in the past” as a way to deliver information.
“… the avatar is also good as well because a lot of the kids right now, that’s the way of what they are doing, so they can change their faces and so forth.” (Peter, age 46).

Diversity and inclusion

This subtheme captured participants’ views on the current limits of, and desired inclusion of, various languages, cultures, and ages in behavioral interventions. They emphasized multilingual content to prevent “language barriers” and ensure that participants “understand what they are seeing.” They also desired the inclusion of “Hispanic” and “Afro” cultures, such as through the use of culturally familiar foods in diet education, so that they could more easily relate to the information given. One participant strongly emphasized the unmet need for a support space for community members over the age of 40 years, noting the lack of such spaces for this age group. Participants noted that depictions of avatars and characters within a virtual space should be “broad” and represent a wide spectrum of gender identities, body types, and clothing preferences.

“… for example, if you are talking about, what is good to eat, in order to have a healthy life? If you tell me, ok, do not eat rice and gondolas, do not eat plantains, I know that plantains and [gandules] identify Latino people, in my opinion, identifies myself. But if you tell me, oh, it’s better for you to eat broccoli and dah, dah, dah, I say, oh, that is not Latino. Even though I know it is healthy to eat broccoli, but it’s not close to Me.” (Jaime, age 61).

“That’s one of my big issues. And I’m being totally honest about that. Any group support, anything; oh, you have to be under 40. You have to be between 18 and 35. And I always say; what about people over 40? We still have HIV. We still have problems.” (Atlantic, age 47).

“But when I say “make it broad” like really open, I’m talking about all types of things; gender, also clothing, also… Because those are expressions.” (Xavier, age 39).

Trivia

This subtheme described participants’ interest in trivia-like games as a feature to facilitate health information uptake. They suggested that “true-or-false multiple choice” and trivia games in general would encourage users to learn about health “conditions.” Trivia could also deepen community members’ knowledge about their own conditions when they were unable to obtain that knowledge from other information sources.

“But they are supposed to have a trivia like that. Like okay, I have cancer. I have liver problems. It’s connecting with my HIV or whatever ailment you have. And they give you the information where you can go. And they tell you where you can go or who you can call, but that’s it.” (Atlantic, age 47).

Visualization and posting

Participants suggested several means of communication for effective health education. They desired spoken and “visual” content, such as videos and diagrams, rather than written content alone, to capture users’ attention. Brief video “series” were thought to retain attention over time. Posting “billboards,” “closed captioning,” and occasional “PSAs” (i.e., public service announcements) was also suggested to deliver health information in an obvious manner without disrupting the experience of navigating a virtual environment.

(Translator response) “He says, one, you can do videos, and you can also give health messages on how medication improves health conditions.
And also, you can post them throughout like, let us say, billboards, or commercials, stuff like that.” (Roseman, age 62).

“I think informational links would be like diagrams and stuff, because everything is visual right now. People aren’t going to sit there and want to read a whole bunch of, you know, stuff, because everything now is like, you know, even with social media, it’s flip, you know, flip, flip, flip. So you know, even like a three-minute video with something, you know, more like a series. Like, one day you watch a video, then the next day you watch another video that is like five-minutes long. So, that keeps people’s attention where you give them like a cliff hanger at the end so that way they will want to watch the next video.” (Jay, age 30).

User-specific engagement preferences

While most participants suggested their preferred approaches, some acknowledged that engagement depends on each user’s personal interests or preferences, regardless of their ability to use the technology or its intriguing features; what is available or useful to one person might not be to another.

“It depends on the person and how frequently they are on the app as well.” (Mr. Jean Pierre, age 50).

“Now if you say I’ve got the most potential and you feel like I’m qualified to play for the NBA, does not mean that I want to play for the NBA. Okay?” (Bunny, age 32).

One participant expressed a dislike of “meeting people he does not know,” even though he was comfortable with using technology and interested in behavioral health education. A few others responded that they were “not going to actually use” the program due to concerns about security and a lack of interest in the gaming format. In contrast, another opinion was that people end up using technology as a necessary tool of current trends, despite personal dislike or potential adverse effects.

“At the end of the day, this would be a tool. … It’s like a car. A car is a tool for you to move. But if you use it wrong, you can kill somebody. So at the end of the day, people need to understand that. These are tools that you are going to use, and you decide how to use them.” (Xavier, age 39).

Theme 9: Nurturing a safe space among users in technology-based behavioral interventions

Participants emphasized the need for technology-based environments to feel like safe spaces where they could choose how much personal information to share, including the option to stay anonymous. Personal privacy preferences were shaped by distrust of digital interactions due to bad actors.

Privacy in virtual environments

Participants understood that privacy was valued differently among community members and that personal preferences for privacy could change over time. While some individuals were “open” and “comfortable” sharing their HIV status and “real name,” they still supported others’ need to remain anonymous and use avatars until ready to share more about themselves in a virtual environment.

“Well, of course privacy is very important. But, I think that if I know the decision should be made by the player. So if the player wants to use his real picture, for example, that’s ok. But if the player prefers to have an avatar, that should be ok too.” (Jaime, age 61).

Distrust and safety concerns

This subtheme explored various concerns that participants held while using online technology.
They understood that individuals they met online might be “shallow” and not forthcoming with their true identity, and thus expressed caution about meeting such individuals in real life. Another concern was the potential for a closed virtual space to be infiltrated by bad actors who did not identify as community members and who might “prey on people.” Tracking information such as cookies, as well as unrequested follow-up messages, discouraged participants from logging onto certain websites and applications.

“Mean for the same reason. If someone shows themselves like this person and they sustain that, and then I’m interested in meeting that person, and it comes to be that that person is not what they described. I’m describing first what can go wrong. Hmm. And even worse things could happen. Like let us meet somewhere. Of course, you need to be really careful in these types of situations. It’s a very well-known rule, even with games, technology, and apps, that you can see the person, and you are not going to meet their person in their apartment.” (Xavier, age 39).

“People can go online just to meet people, like even though it would be something that is around something positive, there are always those people who will try to like prey on people like that. And like somebody might join it and say yes, I’m a party of the community, and you know, learn all this information, get all the facts, just to like find somebody that they can connect and do some real craziness. Like no, maybe they are a killer, I do not know. I do not play those games” (Xander, age 33).

“Privacy I think it’s the main, main, #1 thing. You have to have an app with privacy. I go here. But I know when I’m finished and I close that app or whatever name is that app, they are not going to be popping up in my emails as SPAM, or whatever you call it in emails, or in my Facebook or my Twitter or whatever. I know they are not being connected.” (Atlantic, age 47).
Describing one’s health status The interviewees were asked to rate their current health. Responses ranged from unhealthy/negative through average/neutral to healthy/positive. Participants who perceived themselves as healthy described their health as “very well,” “fine,” “pretty good,” “strong and solid,” “perfect,” “super-blessed,” “completely cool,” or “free,” with some rating their health status numerically (e.g., 10 out of 10). Factors associated with positive health perceptions included regular “medical checkups,” receiving treatment and medication, not “getting sick,” not having “too many health conditions” or “any pain,” and disclosing their condition. They felt healthy when they could “work,” “be able,” and live a “normal” life, such as “going out to do [one’s] errands,” “traveling,” or “just with a little extra precaution.” Some participants evaluated their health positively when their conditions improved compared to their baseline condition or when test results, such as CD4 cell count, showed improvement. “I have already the treatment. I also I’m open about my condition with my friends. I do not have nothing right now that is bothering me like that. I have a good doctor. So I feel that my life is good right now, and I feel healthy.” (Ellie, age 48). In the average/neutral category, participants described their health status as “regular,” “fair,” “average,” “up to par,” “50–50,” and “in the middle.” Underlying conditions such as HIV and other comorbidities, uncertainty about the causes of their illness and symptoms, and the burden of taking multiple medications and dealing with their side effects prevented them from perceiving themselves as fully healthy. “Well, in relation to my HIV, I believe it’s really good. I mean everything is under control. But I have underlying conditions, which cause distraction in my health, so that’s why I rated myself fair.” (Cheo, age 55). Participants who perceived their health status as negative described managing their health as “stressful,” “very hard,” “very difficult,” and “not easy” due to HIV and comorbidities, along with a lack of “possibilities” or availability of treatment and medications. They mentioned coping mechanisms such as “denial,” ignorance, “crying,” and being “isolated” in reaction to their HIV diagnosis and reported feeling lonely, irritable, cranky, tired, depressed, and afraid. “Some days, I wake up being depressed. It has not been easy.” (Yoga, age 65). “Because you know I have this problem with high [blood] pressure … and sometimes that I can feel a little bad for that.” (Jesus, age 54).
Control over health The subtheme of control over health explored participants’ perceptions of how they could control their own health. Participants mentioned they could “control their own body” and “illness.” They also mentioned that their “lifestyle choices” are responsible for their health status and that it is “up to” themselves to “make well-informed decisions.” They perceived the importance of “making changes” and “taking care of [themselves]” to “manage” and “improve” their health. “The high blood pressure, I do believe that some like of my lifestyle choices I think is what led me to developing it. So, it is important that I kind of like have been able to manage it with like medicine and stuff.” (Xander, age 33). “Your energy, your strength, and your mentality controls your illness in your body.” (Bunny, age 32). “I always say; I believe HIV lives with me. I have control of what I eat, what I do to take care of myself.” (Manuel, age 62).
Perceptions of aging Regarding aging, participants acknowledged physiological decline and reduced functionality. They mentioned experiencing or anticipating health problems they are not overly concerned about, noting that their bodies are “not like when [they] were younger.” They also discussed reduced physical activities, metabolism, and social life. Specific concerns associated with aging included physical illnesses and disabilities, such as “stiff joints” and “walking with a cane,” as well as mental issues like “loss of memory” or Alzheimer’s disease. Despite these concerns, a promising outlook on longevity while living with HIV was expressed. They believed they could still engage in health-promoting activities as they age, such as exercising at an appropriate intensity instead of “vigorous” physical activity and finding a balance between alone time and socializing. “Because once you grow up, you can get sick. And your health is not the same. Your body’s not the same. Your body changes.” (Atlantic, age 47). (Translator response) “But you know, when you have age and your elderly, you cannot do it as much.” (Roseman, age 62). Theme 2: Current and anticipated health concerns- This theme explored the health concerns that participants were experiencing and those they worried about facing in the future. Participants expressed significant concerns about chronic, long-term health conditions. When discussing the potential sources of these concerns, they frequently referenced their family’s heredity, family medical history, and observations within their community.
Current health concerns While participants reported a variety of current health concerns, they largely expressed significant worries about chronic CVD, including diabetes, high blood pressure, high cholesterol, and heart disease. Other chronic conditions mentioned included gastrointestinal issues (e.g., cirrhosis, stomach ulcers), neurological conditions (e.g., seizure disorder), pulmonary diseases (e.g., breathing problems, asthma), auditory concerns (e.g., chronic tinnitus), and conditions possibly related to chronic inflammation (e.g., joint pain, carpal tunnel syndrome, plantar fasciitis). Participants also expressed concern about mental health conditions, such as post-traumatic stress disorder, depression, and anxiety, which they perceived as being associated with their HIV diagnosis and medication. Beyond chronic diseases, participants reported lifestyle-related health concerns such as overweight and sleep problems (e.g., difficulty falling asleep, obesity-induced sleep apnea). Infectious diseases, including influenza and SARS-CoV-2 infection (COVID-19), were also mentioned. Participants described these conditions as “cumbersome,” noting that they interfered with leading a normal life, including regular activities and diet. Managing these conditions often required significant lifestyle changes to meet medical recommendations and guidelines. While some participants acknowledged that their ‘lifestyle choices led [them] to developing’ these chronic conditions, others expressed uncertainty about “what’s causing what.” “I feel like a little depression, because you know I need to take this medicine every day for all of my life.” (Jesus, age 54). “Well, my main concern is diabetes, to be honest with you. It’s one of the most challenging things that I’ve ever had to go through. It puts everything else on the backburner as far as my focus, which is on diabetes type 2. It’s really difficult to manage. You have to make drastic live-changes [sic] and diet changes.” (Cheo, age 55). Some participants reported having no current health concerns when their HIV-related symptoms were well controlled with medication, they had no chronic conditions or other illnesses, and their vital signs and laboratory results (e.g., blood pressure, CD4 cell counts) were well managed. They perceived themselves as free of major issues, feeling empowered to “make well-informed decisions” about their health.
Future health concerns Participants reported a range of anticipated health concerns, even though they did not exhibit related symptoms at the time. High blood pressure, diabetes, and heart attacks were highlighted as “really big problems.” They observed their immediate family members (e.g., grandparents, parents), relatives (e.g., aunts), and friends suffering from these conditions and had experienced losses as a result. Participants expressed concern about potential complications, such as diabetes-related blindness, limb loss, and limited mobility. Heart attacks were perceived as particularly serious and as conditions that could unexpectedly affect people, even young individuals in their 30s. Stroke was identified as a common health concern among transgender individuals due to the risk of blood clots as a side effect of hormonal therapy. Cancer, particularly colon cancer, was noted as a higher risk for racially and ethnically minoritized groups. Participants also worried about the exacerbation of symptoms (e.g., worsening tinnitus leading to deafness) and the sudden onset of underlying conditions (e.g., seizures), even if these were currently controlled. Additionally, there was a fear of death related to HIV and concerns about mental health issues and age-related conditions, such as memory loss, Alzheimer’s disease, stiff joints, and resulting disability. Managing these potential health issues was seen as requiring “extra effort in addition to just living with HIV and AIDS,” prompting participants to seek regular screenings and medical consultations with healthcare providers. “My grandmother is actually blind in one eye now due to diabetes. I’ve had some of my aunts lose limbs. … That stuff can get really serious. Diabetes is serious. People do not take it serious. It really is a serious disease. It’s more serious than they take it, to me.” (James, age 35). Theme 3: Behaviors and regimens that improve health and well-being- This theme explored the health maintenance activities that interviewees participate in or wish to adopt to maintain and improve their well-being. This encompassed physical activity, a healthy diet, medical interventions and health education, mental health support, social support, and various other activities.
Promoting physical activity When prompted to think about their physical activity, interviewees recalled activities such as “exercise,” “going to the gym more,” “walking a lot,” and “aerobic or cardio.” Physical activity levels varied due to age or comorbid health conditions. Performing physical activities was bolstered by participating in them alongside peers or incorporating them into daily routines, including daily commutes, grocery shopping, and watching television. “I walk a lot. I try to, if I can walk, I try not to take a bus or a train if it’s within a good walking distance about half the time. Also, I do other stuff like I kayak off the Hudson and stuff like that.” (Jay, age 30). “I walk a lot and…walk with some friends or some person; I feel ready and excited, good. And when I go to the gym, I find some person I know that I can do…when I go, sincerely, when I go to the gym, I’m doing more cardio, walking or cycling, that and other activities.” (Pedro, age 41).
Dietary changes and conscious eating habits Regarding diet, participants recounted the conscious changes they made in efforts to improve their health. Common techniques included exchanging sugar-sweetened beverages with water and limiting consumption of unhealthy and high-carbohydrate foods to “sometimes” or “one day per month.” Participants mentioned seeking information about nutrition from experts, peers, and media channels such as “the cooking channel.” Additionally, some participants mentioned how cultural background influenced their dietary decisions. “Before, I used to not care. And I’d eat a lot of fried stuff, and a lot of rice and pasta and all that stuff. But now everything is moderate with me.” (Cindy, age 55). “If [my doctors] say to drink a lot of water, I drink a lot of water. If they say eat healthy, I’m trying to eat healthy. I eat chicken breasts, salmon, white rice, quinoa, vegetables.” (BMW, age 63). (Translator response) “In the Haitian culture it’s a lot more home cooked meals than outside food. Like McDonald’s is considered junk food. McDonald’s is not…yes, we do not eat McDonald’s like that. We like home cooked meals – rice, beans, plants, and salads.” (Eddy, age 65).
Medical interventions and learning about health This subtheme explored the ways in which community members sought to manage their health and gain information about current medications and clinical treatments to “live with HIV” and comorbid conditions. They regularly met with doctors for activities such as “to get [their] heart check on” and “colon screens” and “to follow all the things my doctor orders.” Participants consulted various sources, including professionals, such as nutritionists and therapists, and online videos. However, they expressed a specific desire to learn health information from medical providers. Learning about their HIV diagnosis and how to cope with “the virus” was described as “calming” and allowed them to feel “much better.” Participants also used preventive measures, such as vaccinations and aspirin, to protect against future illness and proactively sought after information for diseases that they could potentially encounter in the future. (Translator response) “He said the best answer is that you take your medication on time and you do whatever that is prescribed, like as your doctors recommend.” (Eddy, age 65). “It’s like they are coming out with different medications for HIV. They came out with Descovy. They came out with so many of them. So what I do is I, sometimes, I do my research. And I look online, YouTube or videos. I really find out certain information about it. Like for me to really hear somebody, like a medical provider who knows more than we do, that would be perfect, too.” (Rob, age 30).
Mental health support Participants navigated concerns about mental health using various techniques. Stress from HIV diagnosis and other life circumstances manifested through stress eating, panic attacks, and depression. Participants lessened their mental burden by “socializing and connecting” with peers and family who shared similar health experiences. Outside of these interpersonal relationships, they also practiced meditation, scheduled “quiet time,” and attended therapy. In pursuit of a more relaxed lifestyle, participants also reframed their thoughts, such as having their minds focus “on other things” and “not paying to attention to things that cannot affect me.” “So one of the things that has helped me all my life with whatever, you name it, depression or this condition or whatever, is socializing and connecting to other people that are in the same position as I am.” (Xavier, age 39). “It’s always good to talk about it. The more you hold it in, the more you feel like I’m not comfortable, I do not want to express what I have. The best option I have is express your thoughts about it. Do not hold it in.” (Rob, age 30).
Social support The subtheme of social support explored how participants leveraged their social relationships to enhance their health. Numerous participants “relied” on friends, peers living with HIV, and “positive people” to motivate their health journey in areas such as physical activity and mental wellness. These supporters offered encouraging advice such as “take 1 day at a time” and “just stay on that right path.” Their straightforward activity guidance, such as “Do the exercise. Drink a lot of water. Walk for 30 min every day,” was also beneficial in helping participants maintain their health regimens. “… a support group was beneficial for me. And meeting more people living with this condition helped me a lot.” (Xavier, age 39). “As you do all these activities and all these actions, it makes your whole body feel better, makes you do more activities with my friends and with other people, other good role models who are there, who support me” (Rob, age 30).
Miscellaneous health and wellness practices Participants also shared other, miscellaneous health activities they performed. They understood the detrimental impact of alcohol consumption and smoking on their health, although some admitted challenges with smoking cessation. Seeking clean air, maintaining a healthy weight, and getting sufficient sleep were seen as positive actions for well-being. “My asthma is always on. It always ran through my genes. But for some reason, I still smoke. And my sisters and my baby mothers and my cousins, they do not like that about me.” (Bunny, age 32). Theme 4: Encounters with medication- This theme described participants’ motivations and experiences during adherence or non-adherence to medication regimens. Challenges to medication adherence included “complicated” prescription regimens, uncomfortable side effects, and denial of HIV diagnosis.
Benefits and effectiveness Participants adhered to medications when they saw them as a path to return to a “normal life.” Preventive medications were viewed as powerful in that a regular regimen of just a single medication could prevent “drastic” health effects for HIV or other chronic conditions. Although adhering to a strict schedule was sometimes challenging, they had positive thoughts about staying on the medication.” Participants acknowledged that the progression of medication development and access had improved over time. “Nobody dies in this day with HIV. It’s one medication.” (Atlantic, age 47). “I take even aspirins every day to prevent a stroke… I feel deep down in my heart that I’m not going stop ever taking aspirins. And I even tell my mother. She’s almost 80 years old. Take an aspirin every day. Because just with one little small pill could just prevent something so drastic. But today, honestly, I can say it’s just going just fine. Because now, those combinations of two and three pills just in one medication.” (Cindy, age 55).
Side effects and concerns
Participants described non-adherence due to deleterious side effects of medications that caused somatic symptoms such as diarrhea, acid reflux, and weight gain or resulted in psychological symptoms such as depression. The need to take several medications could also contribute to depression. When taking multiple medications at once or having a comorbid condition, participants found it challenging to determine whether discomfort stemmed from a chronic condition or the medication itself. “I feel super-blessed, super-blessed because I do not have to take so many pills and have different mood swings on the behalf of my medicines. One day, I was getting nauseous. Some days, I felt like I have diarrhea. And sometimes, I did not have an appetite. There was weight loss. It was very discomfort.” (Cindy, age 55). “… sometimes I feel like a little depression, because you know I need to take this medicine every day for all of my life.” (Jesus, age 54). “Sometimes I think I have some side effects from the medications, and like I have high blood pressure too, so that can be like, you know, some stuff that I can never really figure out like what’s causing what.” (Jay, age 30).
Theme 5: Social encounters with in-groups and out-groups
This theme focused on participants’ interactions and relationships with both their peers from the community (i.e., living with HIV and having sexually, racially, and ethnically minoritized backgrounds) and individuals outside of it. Some interviewees described themselves as “a people person,” while others were more introverted. Peer relationships were usually positive, whereas interactions with out-group members varied from healing to stigmatizing.
Peer interactions
Participants expressed that “meeting more people with this condition” helped them “a lot.” They also took on roles to educate and “advocate” for peers, helping them learn about HIV, chronic condition prevention (e.g., cancer), and “new information” in health. “Because even someone that actually was confessing to me that; how do you get this? And I explained it to them. And I like to advocate for my fellow peers, and even for myself.” (Cindy, age 55).
Interactions with others outside the community
This subtheme explored how participants navigated social interactions outside of their community. They spoke about chronic health conditions with family members or sought information from live resources. Participants noted that interactions with those outside their community could stigmatize sexual minority men with HIV due to a lack of knowledge among the general population. Some suggested that this could be resolved through greater educational outreach about HIV. “I’m a people person. Like if I was wanted like hardcore information and stuff, I’d be more comfortable in going to like my doctor, or like a community health center or something if they had like groups or something. Like I like to see people and hear about people’s experiences and the exceptional things, like what real people are like.” (Xander, age 33). “Well with me, there’s a lot of stigma still. And this is 2022. And there’s still a stigma with HIV. In this time, people that do not inform themselves and people that are ignorant in the behalf that they try to push you to the side…A lot of my friends and fellow peers have been rejected with their family, giving them paper plates and disposable utensils, because they are family do not get informed about HIV.” (Cindy, age 55). “Even on the commercials, what he sees is targeting the gay community…not just the gay community will have HIV…even the commercials sometimes stigmatizes people, because that is the connection. Everything pink. Pink, pink, pink. Even the cookies. So, it’s stereotyping.” (Alberto, age 62). While some participants were open about their HIV diagnosis such that “everybody” knew, others chose not to disclose their HIV status to co-workers, friends, and family due to stigma and negative judgment. They would “pretend” not to “have anything” to maintain a “normal” facade. “Not everybody in my circle knows because I think this is something you need to be very careful who you tell it to because of the stigma. Not because I think there’s something wrong with it per se.” (Xavier, age 39).
Theme 6: Desired delivery of health education
This theme focused on the health information that participants expect to obtain and the desired approach to delivering health education in a technology-based behavioral intervention. The desired topics of health information were divided into two subthemes: (1) treatment and medication and (2) preventive and general health information. Preferred approaches, including tone, atmosphere, and methodological aspects of health education, were explored in the subtheme of health information delivery quality.
Treatment and medication
Participants indicated that they wanted to learn more about HIV and current comorbid health problems, such as high blood pressure. They were particularly interested in symptom control, self-management strategies, and medication. They emphasized the importance of including updated information in the intervention (e.g., vaccination for monkeypox) and expressed a desire to obtain information on up-to-date HIV treatments and medications that are newly discovered. “… how to control all of the symptoms that I have, with getting through the medications.” (BMW, age 63). “I want to know more information, new information that you are coming out with. That’s why I want to learn more, because it’s always good to learn.” (Rob, age 30).
Preventive and general health information
Participants expressed a desire to learn more about “preventive measures” and “what [they] can do to be better” in health, such as exercising, healthy eating, and even handling emergencies (e.g., layperson cardiopulmonary resuscitation). They wanted to know “how to avoid” the negative consequences of their health behaviors. A “decision tree” was suggested as a method to illustrate the outcomes of their actions. In addition to HIV, they were interested in learning about other conditions, including their risks, symptoms, treatability, and the types of health professionals who could serve as resources, even if these conditions were not of immediate concern. “I’m always trying to learn even stuff that I do not have. I do not have diabetes. I do not have high blood pressure. I do not have cancer. I do not have venereal diseases. I do not have hepatitis C. But I try to inform myself.” (Cindy, age 55).
Health information delivery quality
This subtheme examined the specific strategies and quality of health information delivery that participants desired. Participants emphasized the need for comprehensive health information, referring to it as “different stuff,” “every aspect,” and “a little bit of everything.” They also mentioned that health education should be “quick and informative,” as a “long drawn out” format causes participants to “tune out” or “check out.” Educational materials in the intervention should be simple, use easy-to-understand terminology, and include examples (e.g., how a plate should look for a healthy diet). Additionally, they expressed a preference for a positive tone, noting that pervasive negative health-related news can discourage community members. Participants highlighted the importance of reliable, well-structured sources of information, and they favored learning from a health educator who would lead group health education sessions. They envisioned the health educator as a “leader” or “navigator” who could “start a conversation” and “steer them in the right direction” during the sessions. They expected the health educator to be a “licensed” medical provider who “knows more than [they] do.” “In my opinion, it should not be very scientific. You know this high, scientific words, you know something simple that everybody could understand.” (Jaime, age 61). “… learning more about different kinds of people, like the medical people who know about it, to teach us more information about it. That would be perfect.” (Rob, age 30).
Theme 7: Comfort in using technology and accessibility
This theme focused on participants’ perceptions of using technology and their comfort levels with it. It also explored the factors influencing their access to technology. Most participants, except for two, indicated that they were generally comfortable using technology. They described technology as “standard” these days and effective for information dissemination. Additionally, they noted that technology has been “a big help” and a “very effective way to connect with people.” “I’m very comfortable with technology. I love it, actually. And, I’m very comfortable making friends with people over the Internet.” (Jesus, age 49). However, comfort levels varied depending on the medium used, such as preferences for text messaging, specific social media platforms, or gaming. Factors influencing comfort levels also included technical accessibility and cultural acceptability. Age was largely cited as a determinant of technical accessibility. Older adult participants were often “not tech-savvy” or perceived as such by younger participants, preferring “face-to-face” communications. In contrast, younger individuals were perceived to favor “quick” online interactions or gaming. Cultural factors also played a role in accessibility, with participants mentioning that technology use can vary by race and ethnicity. Two participants expressed that they were not comfortable with technology at all due to old age, long periods of incarceration, and not having a computer at home. Nevertheless, one of these participants showed a willingness to learn and use technology. (Translator response) “Not a comfort. That does not apply to him. He does not have a computer at home; he’s not tech savvy. And only because he’s an elderly person, …” (Eddy, age 65). “Oh well, that’s easy. Technology has been a big help. At first, I was ‘iffy’ about it because I’m really old school.
I was raised by a mother that was straight up Puerto Rican from the hills of the island. But, technology kind of grows on you if you allow it to. So, in the past couple of years, I’ve been able to actually meeting in person some Facebook friends locally in the area, and you know, so I’ve made some really good friendships through technology, yes, through the Internet, and they seem to be going very well…The only barrier that I would say to something like that would be, there are a lot of people in my community, in the black and brown community, that aren’t very tech savvy. So, they really would not know how to maneuver and you know… So, I think maybe…I do not know. It’s something that is a problem, and yes…” (Cheo, age 55). To increase accessibility, participants emphasized the ease of use and the need for training before using general technology or specific technology-based modalities (e.g., navigating gaming interfaces). Providing “how-to videos” was suggested as a potential method to facilitate learning. (Translator response) “He says he would not mind, but he needs to be trained, so he’s not comfortable in doing it because he does not know how to do it. But, if someone trains me, then I would be more comfortable in doing it.” (Roseman, age 62).
Theme 8: Ways to nurture engagement in technology-based behavioral interventions
This theme centered around characteristics and activities interviewees desired to see in a virtual community space for health education that would encourage their active and sustained participation.
Interaction with peers
Participants desired to meet other community members through interactions that mirrored ones in real life, such as support groups and health education conversations that would be “interactive and mutual.” Additionally, participants suggested that community members could intentionally “meet new people” and “socialize” with one another by including a general profile of interests and the ability to guide other players within the virtual environment to retrieve information. Regarding introverted individuals, some interviewees were unsure of their willingness to participate, while others thought that the space would help those “not ready to come out to the world” to “connect with others and let them know that they are not alone.” “And they actually did not take their medication for a long time because of being in denial. But when they realize they are not alone in a video game that they can be playing by themselves at their house, it connects them with this universe of people that are feeling the same way they are. It could be helpful to them.” (Xavier, age 39).
Fun
Participants prioritized the aspect of “fun” and “games” when probed about desired activities in a virtual environment to motivate community members’ uptake of health information. Participants emphasized that “medical” and “learning” material could be woven into non-educational activities and should use attention-grabbing words, not boring jargon, for laypersons. Competition in a gamified setting was highlighted as a common motivator to engage and retain user participation. Some participants wanted action-oriented, “violent” activities such as “killing” or “attacking” antagonists such as “bad guys” or “heart disease” that represented the health conditions users would be trying to prevent or overcome. “Special guests,” such as drag queens, would “grab someone’s attention” and keep them “tuned in” over time. “This is a game so you have to keep it fun. Do not make it too…you are in school, you are doing your work and the teacher asks a question and everybody is raising their hands to see who can answer the quickest. You get home and it’s time to do homework and you do not even want to sit down and do it. You have to keep them interested; keep people…it’s not just medical, you can also put fun, regular things in here…quiz them on cars or capitals of states…small things…and get their attention. As long as you keep it fun, I feel like the healthy part can just be mixed in there, blended all in.” (Success, age 41 and James, age 35). “A Monopoly game where the correct answer, throw the dice. It has to be competitive. Like I have to compete against somebody. I’m thinking about part of the game could be somewhere where people can talk. And then the rules in this house or in this club could be the games. So, I would invite people like; hey, nice to meet you. I’m Xavier. Let us talk a little bit. Hey, you like this. You like that. You know what? I challenge you to this game. So, we both get into that section on the club and start competing.” (Xavier, age 39).
Innovation
Participants expressed interest in the use of avatars due to their technological novelty and customization. They noted that avatars would “grab” their attention in a virtual space, and the virtual environment itself evoked interest since participants “did not have that in the past” to deliver information. “… the avatar is also good as well because a lot of the kids right now, that’s the way of what they are doing, so they can change their faces and so forth.” (Peter, age 46).
Diversity and inclusion
This subtheme included participants’ views on the current limits and desired inclusion of various languages, cultures, and ages in the behavioral interventions. They emphasized multilingual content to prevent “language barriers” and ensure that participants “understand what they are seeing.” They also desired the inclusion of “Hispanic” and “Afro” cultures, such as through the use of culturally familiar foods in diet education, so that they could more easily relate to the information given. One participant also deeply emphasized the unmet need for a support space for community members over the age of 40 years based on the lack of such spaces for this age group. Participants noted that depictions of avatars and characters within a virtual space should be “broad” and represent a wide spectrum of gender identities, body types, and clothing preferences. “… for example, if you are talking about, what is good to eat, in order to have a healthy life? If you tell me, ok, do not eat rice and gondolas, do not eat plantains, I know that plantains and [gandules] identify Latino people, in my opinion, identifies myself. But if you tell me, oh, it’s better for you to eat broccoli and dah, dah, dah, I say, oh, that is not Latino. Even though I know it is healthy to eat broccoli, but it’s not close to Me.” (Jaime, age 61). “That’s one of my big issues. And I’m being totally honest about that. Any group support, anything; oh, you have to be under 40. You have to be between 18 and 35. And I always say; what about people over 40? We still have HIV. We still have problems.” (Atlantic, age 47). “But when I say “make it broad” like really open, I’m talking about all types of things; gender, also clothing, also… Because those are expressions.” (Xavier, age 39).
Trivia
This subtheme described participants’ interest in the use of trivia-like games as a feature to facilitate health information uptake. They suggested that the implementation of “true-or-false multiple choice” and trivia games in general would encourage users to learn about health “conditions.” Trivia would also increase the depth of community members’ knowledge about their own conditions when they were unable to attain the knowledge from other information sources. “But they are supposed to have a trivia like that. Like okay, I have cancer. I have liver problems. It’s connecting with my HIV or whatever ailment you have. And they give you the information where you can go. And they tell you where you can go or who you can call, but that’s it.” (Atlantic, age 47).
Visualization and posting
Participants suggested several means of communication for effective health education to community members. They desired spoken content and “visual” content, such as videos and diagrams, rather than written content alone, to capture users’ attention. Brief video “series” were thought to retain attention over time. Posting “billboards,” “closed captioning,” and occasional “PSAs” (i.e., public service announcements) was also suggested to deliver health information in an obvious manner without disrupting the experience of navigating a virtual environment. (Translator response) “He says, one, you can do videos, and you can also give health messages on how medication improves health conditions. And also, you can post them throughout like, let us say, billboards, or commercials, stuff like that.” (Roseman, age 62). “I think informational links would be like diagrams and stuff, because everything is visual right now. People aren’t going to sit there and want to read a whole bunch of, you know, stuff, because everything now is like, you know, even with social media, it’s flip, you know, flip, flip, flip. So you know, even like a three-minute video with something, you know, more like a series. Like, one day you watch a video, then the next day you watch another video that is like five-minutes long. So, that keeps people’s attention where you give them like a cliff hanger at the end so that way they will want to watch the next video.” (Jay, age 30).
User-specific engagement preferences
While most participants suggested their preferred approaches, some acknowledged that engagement depends on each user’s personal interests or preferences, regardless of their ability to use the technology or its intriguing features. This means that what is available or useful to one person might not be to another. “It depends on the person and how frequently they are on the app as well.” (Mr. Jean Pierre, age 50). “Now if you say I’ve got the most potential and you feel like I’m qualified to play for the NBA, does not mean that I want to play for the NBA. Okay?” (Bunny, age 32). One participant expressed a dislike of “meeting people he does not know,” even though he was comfortable with using technology and interested in behavioral health education. A few others responded that they were “not going to actually use” the program due to concerns about security and a lack of interest in the gaming format. In contrast, another opinion was that people end up using technology as a necessary tool of current trends, despite personal dislike or potential adverse effects. “At the end of the day, this would be a tool. … It’s like a car. A car is a tool for you to move. But if you use it wrong, you can kill somebody. So at the end of the day, people need to understand that. These are tools that you are going to use, and you decide how to use them.” (Xavier, age 39).
Theme 9: Nurturing a safe space among users in technology-based behavioral interventions
Participants emphasized the need for technology-based environments to feel like safe spaces where they could choose how much personal information to share, including the option to stay anonymous. Personal privacy preferences were influenced by distrust of digital interactions due to bad actors.
Privacy in virtual environments
Participants understood that privacy was valued differently among community members and that personal preferences for privacy could change over time. While some individuals were “open” and “comfortable” sharing their HIV status and “real name,” they still supported others’ needs to remain anonymous and use avatars until ready to share more about themselves in a virtual environment. “Well, of course privacy is very important. But, I think that if I know the decision should be made by the player. So if the player wants to use his real picture, for example, that’s ok. But if the player prefers to have an avatar, that should be ok too.” (Jaime, age 61).
Distrust and safety concerns
This subtheme explored various concerns that participants held while using online technology. They understood that individuals they met online may be “shallow” and not forthcoming with their true identity, and thus expressed caution in meeting with such individuals in real life. Another concern was the potential of a closed virtual space to be infiltrated by bad actors who did not identify as community members and who may “prey on people.” Tracking information such as cookies and unrequested follow-up messages discouraged participants from logging onto certain online websites and applications. “Mean for the same reason. If someone shows themselves like this person and they sustain that, and then I’m interested in meeting that person, and it comes to be that that person is not what they described. I’m describing first what can go wrong. Hmm. And even worse things could happen. Like let us meet somewhere. Of course, you need to be really careful in these types of situations. It’s a very well-known rule, even with games, technology, and apps, that you can see the person, and you are not going to meet their person in their apartment.” (Xavier, age 39). “People can go online just to meet people, like even though it would be something that is around something positive, there are always those people who will try to like prey on people like that. And like somebody might join it and say yes, I’m a party of the community, and you know, learn all this information, get all the facts, just to like find somebody that they can connect and do some real craziness. Like no, maybe they are a killer, I do not know. I do not play those games” (Xander, age 33). “Privacy I think it’s the main, main, #1 thing. You have to have an app with privacy. I go here. But I know when I’m finished and I close that app or whatever name is that app, they are not going to be popping up in my emails as SPAM, or whatever you call it in emails, or in my Facebook or my Twitter or whatever. I know they are not being connected.” (Atlantic, age 47).
Step 2: logic model of change and matrix of objectives
Based on the identified problems and needs in Step 1, we developed a logic model of change that outlines the expected program outcomes and their determinants (see ). In this model, outcomes are categorized into distal and proximal, reflecting the overarching goals of CVD prevention and CVH promotion through a technology-based behavioral intervention. Distal outcomes, which represent the primary goals of the intervention, include CVH-related physiological and psychological measures, such as blood pressure, total serum cholesterol, hemoglobin A1c, body mass index (BMI), and depression severity. Proximal outcomes consist of specific behaviors crucial to achieving these goals: informed decision-making, CVH-promoting behaviors, self-management and symptom control, health care access and medical adherence, and social support. These proximal outcomes are directly influenced by key environmental and behavioral determinants, including knowledge, belief, medical distrust, stigma and discrimination, and culture. To achieve the desired outcomes, we established performance objectives (POs) at the behavioral level. For each determinant, we identified specific change objectives (COs) that align with the corresponding POs, detailing the actions necessary to drive these changes (see ). This structured approach ensures that each determinant is addressed systematically to promote the intended health outcomes.
Step 3: theory-based methods
Diffusion of Innovations theory ( ) was selected as a conceptual framework for this study. This theory explores how “new ideas, practices, and technologies” become more familiar and widely adopted within society. It encompasses five key components: (1) innovation attributes—the features of the innovation that influence its adoption; (2) adopter innovativeness—the characteristics and willingness of individuals to embrace new ideas; (3) social system and opinion leaders—the structure and influential figures who can shape attitudes and behaviors; (4) adoption process—the stages an individual goes through when adopting the innovation; and (5) diffusion system—change agency/agents and their methods of promoting the innovation within the social system ( ). This theory has been frequently used in health intervention research, including studies involving sexually, racially, and ethnically minoritized men and those living with HIV ( , ). Given that this study focused on the adoption of innovative health behaviors through a technology-based intervention for CVD prevention, the Diffusion of Innovations theory was well-suited to guide the research. In developing this intervention, which targeted Black and Latinx sexual minority men living with HIV, we also incorporated the Social Determinants of Health Framework as applied to racial and ethnic disparities in CVD outcomes ( ). This framework examines how various social, economic, and environmental factors contribute to CVH inequities, highlighting the considerable impact of structural racism and discrimination as key drivers of these disparities. Given our focus on a population from sexually minoritized and historically disadvantaged racial and ethnic communities, the Social Determinants of Health Framework provided a strong foundation for the research.
Practical strategies
The practical strategies for this protocol were developed using the Intervention Mapping framework, emphasizing culturally tailored digital tools like avatar-led videos and virtual environments ( ). These tools were designed to address specific barriers faced by Black and Latinx sexual minority men with HIV, such as medical distrust and stigma ( ). Additionally, the virtual environment behavioral intervention was premised on recommendations for CVH. The American Heart Association created Life’s Essential 8, a set of key health metrics for promoting CVH. These metrics include: (1) maintaining a heart-healthy diet, (2) engaging in physical activity (at least 150 min of moderate-intensity aerobic activity or 75 min of vigorous activity per week), (3) eliminating nicotine exposure (smoking and secondhand smoke), (4) prioritizing sleep health (7–9 h of quality sleep per night for adults), (5) achieving and maintaining a healthy body weight (BMI between 18.5 and 24.9), (6) managing cholesterol levels (low-density lipoprotein, high-density lipoprotein, and triglycerides), (7) controlling blood glucose (fasting blood glucose under 100 mg/dL or HbA1c less than 5.7%), and (8) maintaining optimal blood pressure (less than 120/80 mmHg) ( , ); a minimal sketch encoding these numeric thresholds appears at the end of this subsection. Recently, the American Heart Association published stroke prevention guidelines that addressed the importance of risk assessment in transgender women ( ). The expansion of recommendations addressing underrepresented populations is advantageous for inclusivity and better health for all.
When designing interventions, grounding programs in practical strategies could facilitate the uptake and adoption of heart health behaviors and ensure that health promotion is both accessible and relevant to a community’s unique cultural and social needs ( ). Moreover, valuing the lived experiences of the target community, respecting and incorporating cultural values, and prioritizing the voices of the community in shaping behavioral interventions enhance the promise of achieving optimal health ( ). When seeking to conduct research with ethnic and racial communities, investigators should acknowledge their social positioning, such as being someone who may or may not share the same community or lived experiences as their sample population. Acknowledging positionality is necessary to foster trust, ensure the ethical conduct of research, and make research outcomes relevant and beneficial for the communities involved ( , ).
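To make the numeric targets in Life's Essential 8 concrete, the following is a minimal sketch in Python that encodes the thresholds quoted above. The field names, and the choice to omit the categorical diet and nicotine metrics and the unspecified cholesterol cut-offs, are our own illustrative assumptions, not part of the published guidelines' data model.

```python
# Minimal sketch: the numeric Life's Essential 8 targets quoted above.
# Field names are hypothetical; diet, nicotine exposure, and cholesterol
# are categorical or numerically unspecified in the text and are omitted.

def essential8_checks(r: dict) -> dict:
    """Return which numeric Life's Essential 8 targets a set of readings meets."""
    return {
        "physical_activity": r["moderate_activity_min_week"] >= 150
                             or r["vigorous_activity_min_week"] >= 75,
        "sleep": 7 <= r["sleep_hours_per_night"] <= 9,
        "body_weight": 18.5 <= r["bmi"] <= 24.9,
        "blood_glucose": r["fasting_glucose_mg_dl"] < 100 or r["hba1c_pct"] < 5.7,
        "blood_pressure": r["systolic_mmhg"] < 120 and r["diastolic_mmhg"] < 80,
    }

example = {
    "moderate_activity_min_week": 160, "vigorous_activity_min_week": 0,
    "sleep_hours_per_night": 7.5, "bmi": 23.0,
    "fasting_glucose_mg_dl": 95, "hba1c_pct": 5.4,
    "systolic_mmhg": 118, "diastolic_mmhg": 76,
}
print(essential8_checks(example))  # every check is True for this example
```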
Discussion
The purpose of this study was to map a CVD prevention intervention for Black and Latinx sexual minority men with HIV using an iterative, evidence-based health promotion framework. Incorporating qualitative methods for local needs assessment in the Intervention Mapping approach allowed community voices to shape and tailor this informed intervention ( ). The local needs assessment aimed to understand participants’ health priorities in an attempt to develop culturally salient interventions. Through qualitative semi-structured interviews, Black and Latinx sexual minority men living with HIV elucidated their specific health priorities, particularly regarding the management of HIV and CVD. These insights directly influenced the content included in the intervention, which was designed to reduce stigma, enhance engagement, and improve overall CVH outcomes. We carefully considered the dynamics of intersecting identities when applying the Intervention Mapping framework to develop this behavioral intervention. Race, ethnicity, and sex were not viewed merely as demographic factors, but as intersectional influences that are closely linked to health behaviors and outcomes ( ). By tailoring the intervention specifically for Black and Latinx sexual minority men living with HIV and incorporating an intersectional perspective, we gained a deeper understanding of their need for culturally relevant education and support. This approach allowed us to acknowledge and respect the diverse lived experiences of the participants. We found that participants were using various online modalities as sources of health information. According to national data, the use of these modalities spans different age and income groups. Polling data from 2023 indicated that a majority of adults aged 30 to 64 regularly used the internet (96–98%), and even 88% of adults aged 65 and older reported regular internet use ( ). Additionally, while participants expressed trepidation about the financial implications of using digital platforms to convey health literacy information, a survey conducted by the Pew Research Center found that even among households earning less than $30,000 annually, 79% of individuals owned a smartphone ( ). Given the ubiquity of smartphones and internet access, it has become evident that the digital divide is narrowing both in terms of accessibility and demographic use. Therefore, the opportunity for future interventions to leverage digital technologies as a means to engage with a broader range of communities will be crucial for advancing health promotion initiatives. The National Science and Technology Council highlights the importance of fostering safety, equity, and engagement within the realm of social-behavioral science ( ). Our intervention promotes these values by creating an inclusive environment where Black and Latinx sexual minority men can feel safe discussing living with HIV and its associated health concerns without fear of stigma. By ensuring inclusivity and representation in the intervention design, more equitable access to health education and community engagement are promoted, ultimately leading to improved health outcomes. We observed the potential for these digital tools to enhance health promotion by leveraging a digital platform to reach participants who might otherwise be hesitant to engage in traditional face-to-face interactions.
This provides both the individual users and the wider community with greater flexibility, allowing for a more convenient access point to safe, evidence-based health information. With CVD projected to increase in prevalence, especially in persons with HIV, emphasis should be placed on the critical need for innovative strategies that integrate digital tools for community-driven health promotion. Community-led initiatives are essential for achieving long-term health equity, as they enable individuals to play an active role in addressing their own health needs and priorities. This paper focuses on the initial three steps of Intervention Mapping: (1) assessing community needs; (2) identifying expected program outcomes and objectives; and (3) selecting theory-based methods and practical strategies, which were used to describe the approach we took to develop a behavioral intervention for CVD prevention in Black and Latinx sexual minority men living with HIV. Future research is essential to explore the remaining steps of Intervention Mapping: (4) producing program components; (5) planning for implementation; and (6) planning for evaluation of the intervention. These subsequent steps are crucial for understanding the intervention’s long-term impact within diverse community settings. Further investigation in these areas will contribute to the refinement of strategies aimed at promoting health equity and addressing CVD prevention in underserved populations.
Limitations
This study is not without limitations. First, the findings may not be generalizable to the broader population, as the sample size, although within qualitative recommendations, may not capture the full diversity of experiences and perspectives. However, we mitigated this limitation by employing measures of rigor, including a detailed interview guide, peer debriefings, and member checking, to ensure the accuracy and dependability of the data. Second, the use of Intervention Mapping as a framework for intervention development may be limited by its rigid and linear approach, which may not fully account for the complexities and nuances inherent in real-world interventions. However, we addressed this limitation by adopting a bottom-up approach, actively engaging with community members and incorporating their thoughts and perspectives into the intervention design. This collaborative approach can enhance the intervention’s effectiveness and sustainability. Third, all participants had health care access, which may limit generalizability to uninsured persons. Fourth, we assessed perceptions of chronic conditions using survey measures, which carries the limitation of social desirability bias. However, our use of validated measures provided a comprehensive understanding of their perceptions, which has informed the development of this culturally salient CVD prevention intervention for Black and Latinx sexual minority men living with HIV.
Conclusion
The purpose of this study was to map a CVD prevention intervention for Black and Latinx sexual minority men with HIV using Intervention Mapping, an iterative, evidence-based health promotion framework. Qualitative methods enabled us to integrate community perspectives, shaping the culturally salient intervention tailored to our target population. Findings from this study underscore the critical need for interventions that address the intersecting identities and unique health priorities of Black and Latinx sexual minority men living with HIV. Future research should continue to prioritize community-engaged, technology-based strategies to promote CVH equity in this population.
The application of varying amount of green manure combined with nitrogen fertilizer altered the soil bacterial community and rice yield in karst paddy areas
The global advocacy for green and clean energy aims to mitigate the environmental toxicity caused by chemical fertilizers. Chinese milk vetch (Astragalus sinicus L.) is frequently utilized as a leguminous green manure in rotation with rice in southern China, markedly diminishing environmental risks while enhancing soil fertility and rice yields. Studies have demonstrated that incorporating green manure can effectively substitute 20–40% of chemical N fertilizers, presenting a highly efficient approach for optimizing fertilizer application. Green manure is typically sown during the winter fallow season and subsequently incorporated into the paddy field at its blooming period. The application of green manure significantly enhances nutrient availability and hence improves rice yield. This improvement can be attributed to the nutrient-rich composition of green manure, including N, P, and K, as well as the gradual release of atmospheric N fixed in its roots during the decay process, ensuring a steady supply of nutrients for the subsequent growth of rice. However, research has highlighted significant differences in rice utilization efficiency among different amounts of green manure and fertilizer inputs, yet the underlying reasons for such variations remain unclear. One plausible explanation is that organic materials release nutrients relatively slowly, whereas early-stage fertilization can rapidly supply nutrients to rice. Over time, as green manure decomposes and releases nutrients, rice can continuously absorb them, thereby enhancing the balanced nutrient supply capability of the soil, particularly its stable N supply. Further investigation has revealed that the degradation of green manure and the conversion of N by soil microorganisms may be more critical factors influencing rice nutrient absorption. Soil microorganisms play critical roles in decomposing green manure, thereby releasing nutrients that can be absorbed and utilized by rice. Studies have shown that applying green manure promotes microbial growth and reproduction, thereby facilitating the release of nutrients from green manure. Taxa such as Proteobacteria, Bacteroidetes, and Ascomycota, which thrive in nutrient-rich environments, are particularly stimulated by this process, thus improving rice’s efficiency in utilizing green manure. Green manure is commonly used in combination with fertilizers. However, the addition of exogenous N significantly reduces the soil C/N ratio and disturbs soil nutrient patterns, affecting microorganisms’ access to available resources and altering the overall composition and function of keystone taxa within the microbial community, ultimately impacting rice yield. Keystone taxa play a crucial role in regulating microbial community structure and function, and co-occurrence networks can help identify them. A decrease in network complexity may result in the loss of the microbial functions of keystone taxa. Much evidence has shown that excessive N loading critically reduces the diversity of the microbial community, inhibiting N fixation as well as the nitrification and denitrification capacity of certain bacterial functional groups.
Therefore, comprehending the impact of different amounts of green manure, especially when combined with N fertilizer, on microorganisms and their functions is crucial for understanding their roles in enhancing the utilization efficiency and yield of rice. Carbonate rocks are widely distributed in karst areas, hosting bacterial communities on their surfaces that play pivotal ecological roles, including N fixation, nitrate metabolism, and carbon-inorganic compound metabolism. Calcareous soil derived from carbonate dissolution and weathering exhibits alkaline and calcium-rich characteristics, harboring unique microbial communities with distinct functionalities. The application of green manure and chemical fertilizers can reduce soil pH, thereby influencing the soil environment and subsequently impacting the microbial community. This alteration may lead to the proliferation of specific functional microorganisms, consequently affecting soil element cycling and nutrient uptake by rice plants, ultimately influencing rice yield. Based on the aforementioned findings, we hypothesized that cultivating green manure alone or in combination with N fertilizer in karst paddy fields would promote rice yield by modifying soil nutrients and altering the composition and function of the soil microbial community. Herein, we present the methodology and results of a three-year field experiment conducted to assess the effects of different fertilization regimes on rice yield, soil nutrients, and the soil bacterial community in a typical brownish-yellow soil of a karst region. Our primary objectives were to address the following four questions: (1) What are the effects of varying amounts of green manure, both independently and in combination with N fertilization, on soil nutrients and rice yield? (2) How do different fertilization regimes affect bacterial community diversity and structure, as well as keystone taxa? (3) What are the relationships between soil nutrients, the soil bacterial community, and rice yield? (4) How does the interaction between fertilization regimes and microbial dynamics influence rice productivity? Through rigorous experimentation and analysis, we aimed to provide comprehensive insights into the complex interplay between fertilization regimes, soil microbial dynamics, and rice productivity in karst paddy environments.
Site description
The experimental site is located in Nanning County, Guangxi Province, China (107°51′21″ E, 23°0′41″ N). The region has a subtropical monsoon climate, with an annual average temperature of 21.6 °C, precipitation of approximately 1,300 mm, and an average altitude of 64 m. The soil was classified as a brown-yellow lime soil derived from carbonate rock. The basic physical and chemical properties of the soil at a depth of 0–20 cm were as follows: pH 7.03, soil organic carbon (SOC) 17.4 g/kg, total nitrogen (TN) 1.96 g/kg, available nitrogen (AN) 158.1 mg/kg, available phosphorus (AP) 11.7 mg/kg, and available potassium (AK) 86 mg/kg. The N content of the green manure was 32.3 g/kg.
Experimental design and plant material
The experiment was initiated in 2017, implementing a double-cropping system for rice cultivation. The rice variety used, Guiyu 9, was obtained from the Rice Research Institute of the Guangxi Academy of Agricultural Sciences. The green manure variety, Chinese milk vetch (Astragalus sinicus L.), with the seed name Guizi 7, was sourced from the Agricultural Resources and Environment Research Institute of the Guangxi Academy of Agricultural Sciences. The milk vetch was uniformly sown 1–2 weeks before the late rice harvest, cultivated during the winter fallow season, and subsequently incorporated into the paddy field at the peak of its blooming. Because in-situ green manure production was insufficient, additional milk vetch was harvested and weighed from an alternative site to achieve the 45 t/ha and 67.5 t/ha return amounts. A total of eight treatments were administered. The group without N addition included (i) no N fertilizer and no GM (N0M0), (ii) 22.5 t/ha GM (N0M22.5), (iii) 45 t/ha GM (N0M45), and (iv) 67.5 t/ha GM (N0M67.5). The group with N addition included (v) N fertilizer and no GM (NM0), (vi) N fertilizer and 22.5 t/ha GM (NM22.5), (vii) N fertilizer and 45 t/ha GM (NM45), and (viii) N fertilizer and 67.5 t/ha GM (NM67.5). We employed a randomized-block design with three replications of each treatment. Each experimental plot was 16.5 m2 (3.3 m × 5 m), separated by ridges to prevent water and nutrient movement between plots. The fertilizers used were urea (46.4% N), calcium superphosphate (18.0% P2O5), and potassium chloride (60% K2O). The N fertilizer applied to the rice was 195 kg/ha in the first year and 180 kg/ha in the second and third years. The phosphorus (P) and potassium (K) application rates remained consistent each year, at 90 kg/ha of phosphorus and 120 kg/ha of potassium. Forty percent of the N, P, and K fertilizers was applied as a basal application, while the remaining 60% was divided equally between top-dressings at the tillering and jointing-booting stages (a short sketch of this split schedule follows this section).
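As a worked example of the fertilization schedule above, the short Python sketch below computes the per-application amounts implied by a 40% basal dressing with the remaining 60% split equally between the two top-dressings. The function name and return structure are our own illustrative choices, not part of the study's protocol.

```python
# Illustrative sketch of the split-application schedule described above:
# 40% of each fertilizer as a basal dressing, with the remaining 60%
# divided equally between top-dressings at tillering and jointing-booting.

def split_schedule(total_kg_ha: float, basal_frac: float = 0.40) -> dict:
    topdress = total_kg_ha * (1.0 - basal_frac) / 2.0
    return {
        "basal": total_kg_ha * basal_frac,
        "tillering": topdress,
        "jointing_booting": topdress,
    }

# N in the second and third years (180 kg/ha):
print(split_schedule(180))  # {'basal': 72.0, 'tillering': 54.0, 'jointing_booting': 54.0}
```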
Soil sampling and physicochemical analysis
Soil samples were collected from the surface soil (0–20 cm) of each plot in July 2020. To reduce variability, five soil cores were collected from each plot following an “S”-shaped pattern and mixed together to form one composite sample. A total of 24 soil samples were transported back to the laboratory, where rice roots and stones were removed. Each sample was divided into two parts: one part was stored at −80 °C for microbial sequencing, and the other part was air-dried for physicochemical analysis. Soil pH was tested using the potentiometric method at a soil-to-water ratio of 1:2.5 (weight:volume). Soil organic carbon (SOC) was measured using the potassium dichromate–sulfuric acid oxidation method. Total nitrogen (TN) was determined using the automatic Kjeldahl method. Total phosphorus (TP) was determined using the molybdenum-antimony colorimetric method. Available N (AN) was determined using the ferrous sulfate-reducing agent diffusion method. Available phosphorus (AP) was measured using the molybdenum-antimony counterstain method with sodium bicarbonate extraction. Available potassium (AK) was measured using ammonium acetate exchange flame photometry. Exchangeable calcium (E-Ca) and exchangeable magnesium (E-Mg) were determined by ammonium acetate exchange-atomic absorption spectrophotometry. Soil physicochemical analyses were conducted according to the methods described by Lu.
DNA extraction and bioinformatic analysis
Soil DNA was extracted from 2.5 g of fresh soil using the PowerSoil DNA Isolation Kit for Soil (Mobio Laboratories, Inc., Carlsbad, CA, USA). The V4–V5 fragment of the bacterial 16S rRNA gene was amplified with the primer pair 515F (5'-GTGCCAGCMGCCGCGGTAA-3') and 907R (5'-CCGTCAATTCMTTTRAGTTT-3'). The PCR amplification conditions were denaturation at 95 °C for 10 s, annealing at 55 °C for 30 s, and extension at 72 °C for 45 s. Sequencing was then conducted on the Illumina NovaSeq high-throughput sequencing platform by MAGIGENE (Guangdong, China). Forward and reverse sequences were merged using FLASH 1.2.11 software, and low-quality reads (shorter than 200 bp) and chimeras were removed. The remaining high-quality reads were aligned and clustered into operational taxonomic units (OTUs) at a 97% similarity level using USEARCH software. The representative OTUs were then compared with the SILVA 132 16S rRNA database to determine the taxa in each sample. To eliminate potential bias caused by different sequencing depths, the OTU tables were rarefied to the minimum read number across all samples (36,263 reads per sample after quality control; see the sketch following the statistical analysis subsection). Alpha diversity indices and beta diversity distance matrices were calculated using the QIIME software, based on the randomly subsampled OTU tables with the same sequencing depth. Phyla and classes with a relative abundance of ≥1% were defined as dominant phyla and classes. The 16S rRNA gene sequences were deposited in the NCBI Sequence Read Archive database under accession number PRJNA1031136.
Statistical analysis
One-way ANOVA and two-way ANOVA were conducted using SPSS 25.0 software to compare the differences in rice yield, soil properties, and alpha diversity among different treatments. Principal component analysis (PCA) and analysis of similarity (ANOSIM) were conducted with the R package “vegan” to assess the differences in soil bacterial community composition. Redundancy analysis (RDA) using CANOCO 5.0 and Mantel tests using the R package “devtools” were used to evaluate correlations between the bacterial community and soil factors. Structural equation modeling (SEM) was employed to analyze the potential direct and indirect effects of soil factors and microbial factors on rice yield caused by fertilization. The SEM analysis was conducted using the robust maximum likelihood estimation method in AMOS 28.0 (AMOS IBM, USA).
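Before turning to the network analysis, the rarefaction step described under the bioinformatic analysis above can be illustrated with a short Python sketch. It subsamples each sample's OTU counts without replacement down to the minimum library size (36,263 reads in this study) and computes Shannon diversity. This is a minimal stand-in for the QIIME workflow actually used, with hypothetical variable names and toy data.

```python
import numpy as np

# Toy OTU count table: rows = samples, columns = OTUs (integer read counts).
rng = np.random.default_rng(42)
counts = rng.integers(0, 500, size=(24, 200))

# Rarefy every sample to the smallest library size (36,263 reads in the study).
depth = counts.sum(axis=1).min()
rarefied = np.array([rng.multivariate_hypergeometric(row, depth)  # w/o replacement
                     for row in counts])

# Shannon diversity index from the rarefied table.
p = rarefied / rarefied.sum(axis=1, keepdims=True)
shannon = np.array([-(q[q > 0] * np.log(q[q > 0])).sum() for q in p])
print(shannon.round(3))
```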
Network analyses and keystone species
Co-occurrence networks were used to assess microbial community complexity and identify potential keystone taxa. To avoid spurious correlations, only soil bacterial OTUs with a relative abundance greater than 0.1% underwent Spearman correlation analysis, with p-values adjusted using the false discovery rate correction. The R package “psych” was used to construct the correlation network; correlations with coefficients above 0.6 and adjusted p-values below 0.05 were retained as network edges. Networks were then visualized using Gephi. OTUs with the highest degree and highest closeness centrality were considered candidate keystone taxa: the sum of these two values was transformed into a Z-score, and OTUs with Z-score values greater than 1.0 were selected as keystone taxa.
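The following sketch illustrates the network construction and keystone screening just described, using SciPy, statsmodels, and NetworkX in place of the R package "psych" and Gephi. The thresholds (|r| > 0.6, adjusted p < 0.05, Z > 1.0) follow the text, while the toy data and the exact way degree and closeness are z-scored and summed are our own reading of the description, not the study's code.

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

# abund: relative-abundance table, rows = samples, columns = OTUs (>0.1% only).
# Toy data built from unequal correlated blocks so some edges survive filtering.
rng = np.random.default_rng(0)
base = rng.random((24, 6))
sizes = [3, 5, 7, 10, 15, 20]
abund = np.repeat(base, sizes, axis=1) + rng.normal(0.0, 0.05, size=(24, sum(sizes)))

rho, pval = spearmanr(abund)                       # OTU-by-OTU matrices
iu = np.triu_indices_from(pval, k=1)               # unique OTU pairs
reject, p_adj, _, _ = multipletests(pval[iu], method="fdr_bh")  # FDR correction

G = nx.Graph()
G.add_nodes_from(range(abund.shape[1]))
for (i, j), r, keep in zip(zip(*iu), rho[iu], reject):
    if keep and abs(r) > 0.6:                      # |r| > 0.6, adjusted p < 0.05
        G.add_edge(i, j, weight=float(r))

# Keystone screening: z-score degree and closeness centrality, sum, keep Z > 1.0.
deg = np.array([d for _, d in G.degree()], dtype=float)
cc = nx.closeness_centrality(G)
clo = np.array([cc[n] for n in G.nodes()])
zscore = lambda x: (x - x.mean()) / x.std()
combined = zscore(deg) + zscore(clo)
print("candidate keystone OTUs:", np.where(combined > 1.0)[0])
```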
Soil physicochemical analyses were conducted according to the methods described by Lu .
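To make the split-application scheme concrete, the short Python sketch below computes the per-stage doses from the annual rates quoted in the experimental design. It is purely illustrative, and the assumption that the stated P and K rates refer to P2O5 and K2O is ours, not the authors':

```python
# Illustrative only: per-stage fertilizer doses under the 40% basal /
# 60% split top-dressing scheme described in the experimental design.
rates_kg_ha = {"N": 180.0, "P2O5 (assumed)": 90.0, "K2O (assumed)": 120.0}

for nutrient, total in rates_kg_ha.items():
    basal = 0.40 * total             # applied as the basal dressing
    top_dressing = 0.60 * total / 2  # tillering and jointing-booting stages
    print(f"{nutrient}: {basal:.0f} kg/ha basal, "
          f"{top_dressing:.0f} kg/ha at each of two top-dressings")
```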
Soil DNA was extracted from 2.5 g of fresh soil using the PowerSoil DNA Isolation Kit (MoBio Laboratories, Inc., Carlsbad, CA, USA). The V4–V5 region of the bacterial 16S rRNA gene was amplified with the primer pair 515F (5'-GTGCCAGCMGCCGCGGTAA-3') and 907R (5'-CCGTCAATTCMTTTRAGTTT-3'). The PCR cycling conditions were denaturation at 95 °C for 10 s, annealing at 55 °C for 30 s, and extension at 72 °C for 45 s. The 16S rRNA amplicons were then sequenced on the Illumina NovaSeq high-throughput platform by MAGIGENE (Guangdong, China). Forward and reverse reads were merged using FLASH 1.2.11, and low-quality sequences (shorter than 200 bp) and chimeras were removed. The remaining high-quality reads were aligned and clustered into operational taxonomic units (OTUs) at a 97% similarity level using USEARCH. Representative OTU sequences were then compared against the SILVA 132 16S rRNA database to assign taxonomy. To eliminate potential bias caused by differing sequencing depths, the OTU tables were rarefied to the minimum read number across all samples (36,263 reads after quality control). Alpha diversity indices and beta diversity distance matrices were calculated in QIIME from the randomly subsampled OTU tables at equal sequencing depth. Phyla and classes with a relative abundance of ≥ 1% were defined as dominant. The 16S rRNA gene sequences were deposited in the NCBI Sequence Read Archive under accession number PRJNA1031136.
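The rarefaction and alpha-diversity calculations were performed in QIIME; purely as an illustration of what those steps compute (this is not the authors' pipeline), the Python sketch below subsamples a single OTU count vector to a fixed depth and derives observed OTUs, Shannon diversity, and Chao1:

```python
import numpy as np

rng = np.random.default_rng(0)

def rarefy(counts, depth):
    """Subsample `depth` reads without replacement from an OTU count
    vector, mimicking single-sample rarefaction."""
    pool = np.repeat(np.arange(counts.size), counts)  # one entry per read
    picked = rng.choice(pool, size=depth, replace=False)
    return np.bincount(picked, minlength=counts.size)

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)))

def chao1(counts):
    s_obs = int(np.count_nonzero(counts))
    f1 = int(np.sum(counts == 1))   # singletons
    f2 = int(np.sum(counts == 2))   # doubletons
    if f2 == 0:                     # bias-corrected form
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 ** 2 / (2.0 * f2)

otu_counts = np.array([500, 120, 33, 8, 2, 1, 1, 0])  # toy sample
rarefied = rarefy(otu_counts, depth=300)
print(np.count_nonzero(rarefied), shannon(rarefied), chao1(rarefied))
```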
One-way and two-way ANOVA were conducted in SPSS 25.0 to compare differences in rice yield, soil properties, and alpha diversity among treatments. Principal component analysis (PCA) and analysis of similarity (ANOSIM) were conducted with the R package "vegan" to assess differences in soil bacterial community composition. Redundancy analysis (RDA) in CANOCO 5.0 and Mantel tests using the R package "devtools" were used to evaluate correlations between the bacterial community and soil factors. Structural equation modeling (SEM) was employed to analyze the potential direct and indirect effects of soil and microbial factors on rice yield under the different fertilization regimes. The SEM analysis was conducted with the robust maximum likelihood estimation method in AMOS 28.0 (IBM, USA).
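The statistical comparisons above were run in SPSS, R, CANOCO and AMOS; as a hedged, stand-in illustration of two of the core steps (the two-way green manure × N ANOVA and the PCA ordination), the following self-contained Python sketch runs equivalent analyses on toy data:

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Toy design: 4 green-manure rates x 2 N levels x 3 replicate plots.
rows = [
    {"gm": gm, "n_fert": n,
     "yield_t_ha": 6.0 + 0.01 * gm + 1.5 * n + rng.normal(0, 0.3)}
    for gm in (0, 22.5, 45, 67.5) for n in (0, 1) for _ in range(3)
]
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction, analogous to the milk vetch x N test.
model = ols("yield_t_ha ~ C(gm) * C(n_fert)", data=df).fit()
print(anova_lm(model, typ=2))

# PCA on a toy samples x OTU relative-abundance matrix.
otu = rng.random((24, 50))
otu = otu / otu.sum(axis=1, keepdims=True)
scores = PCA(n_components=2).fit_transform(otu)
print(scores[:3])
```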
Co-occurrence networks were used to assess microbial network complexity and identify potential keystone taxa. To avoid spurious correlations, only soil bacterial OTUs with a relative abundance greater than 0.1% were subjected to Spearman correlation analysis, with p-values adjusted by false discovery rate correction. The R package "psych" was used to construct the correlation network; only pairs with correlation coefficients above 0.6 and p-values below 0.05 were retained as network edges. Networks were then visualized in Gephi. OTUs with high degree and high closeness centrality were considered keystone candidates: the sum of these two values was transformed into a Z-score, and OTUs with Z-scores greater than 1.0 were selected as keystone taxa.
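A condensed Python sketch of this workflow is given below. It is illustrative only (the study used the R package "psych" and Gephi), the toy abundance matrix is ours, and the keystone recipe (standardizing the sum of degree and closeness centrality) follows our reading of the text:

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)

# Toy abundance matrix (24 samples x 40 OTUs) with unequal correlated
# blocks, so that some OTU pairs are genuinely associated.
base = rng.random((24, 8))
sizes = [7, 6, 6, 5, 5, 4, 4, 3]
abund = np.concatenate(
    [np.tile(base[:, [k]], (1, s)) for k, s in enumerate(sizes)], axis=1
) + rng.normal(0, 0.05, size=(24, sum(sizes)))

rho, p = spearmanr(abund)                 # 40 x 40 matrices over OTU columns
iu = np.triu_indices_from(rho, k=1)       # unique OTU pairs
reject, p_adj, _, _ = multipletests(p[iu], alpha=0.05, method="fdr_bh")

# Retain edges with |rho| > 0.6 and FDR-significant p, as in the text.
G = nx.Graph()
G.add_nodes_from(range(abund.shape[1]))
for (i, j), r, ok in zip(zip(iu[0], iu[1]), rho[iu], reject):
    if ok and abs(r) > 0.6:
        G.add_edge(int(i), int(j), weight=float(r))

# Keystone candidates: Z-score of (degree + closeness centrality) > 1.0.
nodes = list(G.nodes)
combined = np.array([G.degree(n) + nx.closeness_centrality(G, n) for n in nodes])
z = (combined - combined.mean()) / combined.std()
print("keystone OTUs:", [n for n, zi in zip(nodes, z) if zi > 1.0])
```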
Rice yield and soil properties

Generally, rice yield exhibited an upward trend as the application of green manure increased (Fig. ). Compared to the N0M0 treatment, N0M22.5, N0M45, and N0M67.5 significantly increased rice yields by 15.51–22.08%. NM45 and NM67.5 also significantly increased rice yields, by 9.81% and 10.17%, respectively, compared to NM0. Moreover, the addition of N significantly increased rice yield by 21.84–35% compared to the treatments without N addition. Two-way ANOVA revealed significant interactions between milk vetch (MV) and N fertilizer for most of the soil properties (Table ). Across all treatments, there was a significant increase in TN and E-Mg contents compared to N0M0, alongside a notable decrease in soil pH. The soil available nutrients (AN, AK, and AP) increased with escalating green manure application in the group without N addition. In contrast, available nutrients (AN and AP) in the group with N addition were slightly higher than in the group without, with significant differences observed in the NM22.5 and NM45 treatments. Fertilization treatments did not significantly change the SOC and TP contents.

Microbial community characteristics under different treatments

A total of 9 major phyla were identified as dominant, with the top five being Proteobacteria (23.73–31.15%), Chloroflexi (17.33–25.43%), Nitrospirae (8.09–11.86%), Bacteroidetes (6.75–9.10%), and Acidobacteria (6.53–7.86%; Fig. A). N0M22.5 and N0M67.5 significantly decreased the relative abundance of Proteobacteria by 21.60% and 16.33%, respectively, compared to N0M0. Similarly, N0M45 and N0M67.5 significantly decreased the relative abundance of Nitrospirae by 30.75% and 31.78%, respectively, compared to N0M0. Moreover, N0M22.5, N0M45, and N0M67.5 significantly decreased the relative abundance of Firmicutes by 44.76–77.28% compared to N0M0. In contrast, N0M22.5, N0M45 and N0M67.5 significantly increased the relative abundance of Chloroflexi, Planctomycetes and Verrucomicrobia by 18.79–46.70%, 51.06–81.27%, and 26.01–45.11%, respectively, compared with N0M0. Conversely, NM22.5 and NM67.5 significantly increased the relative abundance of Proteobacteria by 14.84% and 16.27%, while significantly decreasing the relative abundance of Chloroflexi by 9.43% and 14.76%, respectively, compared to NM0. NM45 and NM67.5 also significantly increased the relative abundance of Bacteroidetes, by 12.00% and 13.38%, respectively, compared with NM0. At the class level, 12 major classes were identified as dominant (Fig. B). All treatments significantly increased the relative abundance of Anaerolineae and Nitrospirae_4-29-1 compared to N0M0, while significantly decreasing the relative abundance of Deltaproteobacteria and Thermodesulfovibrionia.

Bacterial community diversity and structure under different treatments

The alpha and beta diversities of soil bacteria are depicted in Fig. . Compared to the N0M0 treatment, the Chao1 index was significantly increased in the group without N addition (N0M22.5, N0M45 and N0M67.5), while there was no significant difference in the group with N addition (NM22.5, NM45 and NM67.5) (Fig. A). Additionally, the highest Shannon index was observed in N0M45 and the lowest in NM45. The observed OTUs showed results consistent with the Shannon index.
Fertilization regimes significantly affected the soil bacterial community structure. The PCA results revealed that the soil bacterial community under N0M0 was clearly separated from those under N0M45 and N0M67.5, whereas overlap was observed among the N0M22.5, NM0, NM22.5, NM45 and NM67.5 treatments (Fig. B). Significant differences in soil bacterial community structure were evident in the group without N addition (Fig. C), while community structure was similar across the group with N addition (Fig. D).

Co-occurrence network and keystone taxa under different treatments

Network analysis was used to reveal the interactions of soil bacteria across the fertilization treatments. As green manure application increased, species transfers between modules occurred, leading to enhanced stability of the soil bacterial co-occurrence network, irrespective of N fertilizer application (Fig. ). Moreover, in the absence of N addition, the proportion of negative correlations decreased with escalating green manure input, while it tended to increase in the presence of N addition (Table ). This suggests that green manure primarily exerted a synergistic effect on soil bacterial interactions, whereas competition became the dominant effect following N fertilizer addition. The eight treatments were categorized into two groups to identify keystone taxa (Fig. ). In the group without N addition, keystone taxa included Latescibacteria (OTU91, OTU431); Anaerolineaceae (OTU19, OTU20, OTU93) from Chloroflexi; Betaproteobacteriales (OTU125, OTU110, OTU63) and Ectothiorhodospirales (OTU28) from Gammaproteobacteria; Myxococcales (OTU111, OTU70) and Desulfarculales (OTU101) from Deltaproteobacteria; Rhizobiales of Alphaproteobacteria; the Pla4 lineage (OTU51) of Planctomycetes; Gemmatimonadales of Gemmatimonadetes; and subgroups 4, 5, 6, 11 and 22 of Acidobacteria (OTU42, OTU81, OTU31/OTU47, OTU55, OTU61) (Table ). In the group with N addition, keystone taxa included Anaerolineaceae (OTU24, OTU3193, OTU89, OTU164, OTU510, OTU8, OTU135, OTU93) from Chloroflexi; Betaproteobacteriales (OTU110, OTU69) and Methylococcales (OTU71) from Gammaproteobacteria; Myxococcales (OTU43, OTU112), Desulfobacterales (OTU122), and Desulfuromonadales (OTU64) from Deltaproteobacteria; Sphingobacteriales (OTU29) and Chitinophagales (OTU80) from Bacteroidetes; and Nitrospirae_4-29-1 (OTU6) and Chthoniobacterales from Verrucomicrobia (Table ).

Relationship between soil bacterial community and soil physicochemical properties

The RDA showed that environmental variables explained 71.42% and 50.15% of the variation in bacterial communities in the groups without and with N addition, respectively (Fig. A, B). pH (F = 8.2, p = 0.006), E-Mg (F = 6.8, p = 0.004), TN (F = 4.9, p = 0.014), AP (F = 3.1, p = 0.022), and SOC (F = 2.8, p = 0.046) significantly affected soil bacterial community structure in the group without N addition (Fig. A). Only pH (F = 2.1, p = 0.006) and E-Mg (F = 2.0, p = 0.038) significantly affected community structure in the group with N addition (Fig. B). Mantel test analysis suggested that soil bacterial community composition was significantly affected by soil environmental factors, including soil pH, TN, SOC, and E-Mg (p < 0.05), whereas no significant effect of environmental factors on the soil bacterial community remained after the addition of N fertilizer (Fig. C).
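For readers unfamiliar with the Mantel procedure referenced here, the short Python implementation below (ours, purely for illustration; the study used R) correlates a Bray-Curtis community distance matrix with a Euclidean environmental distance matrix and computes a one-sided permutation p-value:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)

def mantel(d1, d2, perms=999):
    """Permutation Mantel test on two square distance matrices."""
    iu = np.triu_indices_from(d1, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    n = d1.shape[0]
    hits = 0
    for _ in range(perms):
        order = rng.permutation(n)
        r_perm = np.corrcoef(d1[np.ix_(order, order)][iu], d2[iu])[0, 1]
        hits += r_perm >= r_obs
    return r_obs, (hits + 1) / (perms + 1)

community = rng.random((24, 40))   # toy OTU table (24 samples)
env = rng.random((24, 4))          # toy pH, TN, SOC, E-Mg values
d_comm = squareform(pdist(community, metric="braycurtis"))
d_env = squareform(pdist(env, metric="euclidean"))
print(mantel(d_comm, d_env))       # (r, p); random data give p near 0.5
```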
Furthermore, in the group without N addition, keystone taxa were positively correlated with soil environmental factors such as pH, TN, AN, AP, SOC and E-Mg (p < 0.05). In the group with N addition, keystone taxa were only positively correlated with soil TN (p < 0.05).

Relationship between rice yield and abiotic and biological factors

The SEM analysis further confirmed that fertilization regimes had a direct impact on rice yield (Fig. ). In the group without N addition, the application of green manure alone significantly altered soil TN and pH, subsequently influencing the overall bacterial community and its interactions. This suggests that fertilization affected the soil environment, which in turn shaped the microbial community. However, the influence of the microbial community on rice yield was not statistically significant (Fig. A). Conversely, in the group with N addition, fertilization did not significantly affect soil TN, SOC, or pH, yet the overall microbial community, its diversity, and its interactions significantly impacted rice yield, displaying an effect opposite to that of green manure application alone. These findings indicate that the application of green manure combined with N fertilizer altered both the microbial community and rice yield (Fig. B).
The prolonged use of chemical fertilizers can detrimentally affect the soil environment and the delicate agricultural ecosystems of karst regions. To mitigate these harmful effects and increase rice yield, it is necessary to reduce the input of chemical fertilizers and explore alternatives such as green manure, which has been proven effective in maintaining soil health and increasing rice yield. In this study, we observed a significant increase in rice yield of 15.51–22.08% with green manure alone compared to N0M0, and a further improvement of 21.84–35% following N fertilizer addition (Fig. ). This finding is consistent with previous studies. The enhanced rice yield may be attributed to two main factors: (i) improvement in soil fertility and nutrient availability facilitated by the decomposition of green manure; and (ii) alterations in the soil microbial community structure, especially in specific functional microbial communities that improve the soil microenvironment and nutrient availability, consequently boosting rice yield.

Effects of different amounts of green manure on soil physicochemical properties

Research has shown that incorporating green manure into paddy fields enhances rice growth and improves nutrient absorption and utilization efficiency, ultimately increasing rice yield. In our study, escalating green manure input increased soil TN, AN, AP, and AK contents in the group without N addition, a trend not observed in the group with N addition (Table ). This rise can be attributed to N release from the decomposition of green manure, which includes both fixed atmospheric N and plant N, thereby significantly augmenting N input into the soil. In the group with N addition, however, TN and AN decreased with escalating green manure input, consistent with previous findings that excessive N application reduces soil TN and AN contents, and suggesting that NM22.5 can mitigate N loss. In karst soil, high levels of Ca²⁺ combine with H₂PO₄⁻ to form insoluble phosphates, limiting the phosphorus available for plant uptake. However, AP content increased with green manure input, and this effect was further amplified by N addition (Table ). The application of green manure, especially when combined with N fertilizer, significantly enhanced the abundance and activity of phosphate-solubilizing bacteria, facilitating the conversion of insoluble phosphorus into plant-available forms and thereby substantially increasing available phosphorus content. Additionally, the rhizosphere of the green manure released substantial amounts of organic acids that interact with ligand groups on mineral surfaces, thereby enhancing soil potassium availability. Yu et al. reported that an appropriate N application rate promoted phosphorus and potassium uptake in rice, whereas excessive N application diminished this ability. Soil AK and AP were notably reduced in the NM22.5 and NM45 treatments, indicating that a lower amount of green manure combined with N fertilizer enhances the efficient utilization of AK and AP by rice. We observed no significant change in SOC (p > 0.05), indicating that green manure maintained stable carbon levels in the soil. This stability could be attributed to the addition of exogenous organic matter and N fertilizer, which stimulated soil microorganisms to mineralize SOC, with recalcitrant components stably stored within soil aggregates in the paddy field. Moreover, the decomposition of green manure and the release of N fertilizer resulted in a reduction in soil pH.
This acidification process can counterbalance the alkaline nature of lime soil, directly or indirectly impacting the activity and diversity of microbial communities.

Effects of different amounts of green manure on soil bacterial community structure, co-occurrence network and keystone taxa

The distribution pattern of soil bacteria varies under different fertilization regimes owing to differences in physicochemical properties. Compared to the application of green manure alone, soil bacterial diversity decreased after the addition of N fertilizer (Fig. A), consistent with numerous previous studies linking this decrease to soil pH reduction, which strongly influences microbial community composition and diversity. Microbial growth is generally constrained by resource availability, meaning that the ability of soil microorganisms to access nutrients depends partly on the choice of fertilization method. An extreme imbalance between available resources and microbial nutrient requirements can lead to changes in soil bacterial communities and their survival strategies. In this study, we observed that the relative abundance of Chloroflexi, Planctomycetes and Verrucomicrobia increased in the group without N addition (Fig. A). These bacteria are classified as oligotrophs, capable of thriving at low substrate concentrations, suggesting that green manure decomposition provided low-quality nutrients for the paddy fields. Conversely, the addition of exogenous inorganic N rapidly alters the soil nutrient status, resulting in a lower C/N ratio compared to green manure alone. Proteobacteria and Bacteroidetes, which are considered potential copiotrophs, increased in the group with N addition (Fig. A), indicating their competitive advantage in nutrient-rich environments. We therefore conclude that differences in fertilization affect soil bacterial community composition and survival strategies. PCA revealed significant differences in soil bacterial communities under the application of green manure alone, whereas considerable overlap was observed after the addition of N fertilizer (Fig. B-D), indicating that, compared to green manure alone, N fertilizer disrupted the stability of the original bacterial community and led to its homogenization. Soil pH exhibited a strong correlation with the soil bacterial community in all treatments (Fig. A, B), largely limiting the niche range available to the community. SOC, TN, AN, and AP were significantly correlated with the soil bacterial community in the group without N addition (Fig. A). Although the nutrients released by green manure decay provided the necessary C, N, P, and K sources for soil microorganisms and rice, the available nutrients were preferentially absorbed and utilized by rice, making these environmental factors key constraints on changes in the soil bacterial communities. Microorganisms respond quickly to drastic environmental changes, adjusting their communities and ecological functions to maintain ecosystem stability. In this study, the addition of N fertilizer caused significant disturbance to the soil environment and resulted in aggregation of the soil bacterial community (Fig. D). Moreover, the large N input disrupted the original nutrient distribution pattern and caused a priming effect, leading to a lower C/N ratio in the soil and resulting in changes in life-history strategies.
Furthermore, N addition could enhance the function of microbial taxa that specifically decompose organic compounds, facilitating material transformation and nutrient availability. Therefore, further exploration of the effects of key functional species on nutrient turnover under different fertilization treatments is necessary.

Effects of different amounts of green manure on soil bacterial co-occurrence network and keystone taxa

Fertilization practices have been shown to alter the stability of microbial networks and the function of keystone taxa, which are directly related to nutrient cycling in paddy ecosystems. In this study, we found that the addition of N fertilizer reduced the proportion of positive correlations and the complexity of the soil bacterial network compared with the application of green manure alone (Fig. ). This suggests a synergistic effect among soil bacteria under green manure alone, and a competitive effect following the addition of N fertilizer. Yuan et al. reported that network complexity and stability strongly influence microbial community structure and ecosystem functional processes. The observed decrease in complexity and stability of the bacterial community may lead to changes in keystone taxa and their function within the network under different fertilization regimes. Anaerolineales, Myxococcales, Desulfobacterales, Pirellulales, and Betaproteobacteriales are commonly found in the anaerobic environment of rice paddy fields. Specifically, Rhizobiales coexist with leguminous plants and promote soil fertility through N fixation. Latescibacteria play a significant role in decomposing green manure owing to their strong saprophytic features, which enable them to degrade polysaccharides, lipids, and proteins in bacterial, plant, and fungal materials. Subgroups 4, 5, 6, 11 and 22 of Acidobacteria, classified as oligotrophs, efficiently utilize limited nitrogen and labile carbon sources, benefit from the application of green manure for r-strategy propagation, and exhibit a high growth rate in response to environmental disturbance. The N-fixing bacteria coexisting with the leguminous green manure, together with the other keystone taxa, appear to be important in the degradation of carbon compounds, providing rice with a continuous N supply from atmospheric N and from the decomposition of green manure. In contrast, Sphingobacteriales and Chitinophagales, classified as copiotrophs, thrive on labile carbon and abundant nitrogen sources. Nitrospirae_4-29 and Chthoniobacterales of Verrucomicrobia appear to be important in nitrification and denitrification, respectively, indicating that the keystone species under the combination of green manure and chemical fertilizer may affect N transformation processes. Mantel test analysis showed a strong correlation between keystone taxa and TN content (Fig. C), indicating that the keystone species may positively affect the N use efficiency of rice and significantly increase rice yield. The addition of N fertilizer promoted the aggregation of specific microorganisms and the secretion of hydrolytic enzymes, facilitating soil N turnover and enhancing the nitrogen cycling process. Moreover, the combination of green manure with N fertilizer was beneficial in increasing the number of phosphate-solubilizing bacteria, greatly improving phosphorus availability in the soil.
Overall, the changes in the microbial network and in the function of keystone taxa under the different fertilization regimes might strongly affect nutrient transformation processes and, consequently, nutrient use efficiency in rice cultivation.

Response of biological and abiotic factors on rice yield

The impact of different fertilization regimes on rice yield is substantial, and understanding the microbial mechanisms involved is essential for unraveling the complexity of paddy ecosystems. SEM analysis revealed that the application of green manure significantly affected soil TN and pH, consequently shaping soil microbial communities and the co-occurrence network (Fig. A). However, this did not result in a significant improvement in rice yield, primarily because of the limited availability of nutrients, which left both soil microorganisms and rice in a state of nutrient starvation. In contrast, when green manure was combined with N fertilizer, there was no significant effect on TN, SOC, or pH, but the soil bacterial community significantly influenced rice yield (Fig. B). This indicates that N fertilizer improved rice yield by modulating the bacterial community and keystone taxa, which in turn regulated N transformation processes and indirectly promoted nutrient absorption by rice. It is worth noting, however, that while green manure plays a positive role in increasing rice yield, excessive green manure input provides little additional benefit. Therefore, the judicious addition of chemical fertilizer on top of green manure incorporation can effectively boost rice yield. These findings are not only crucial for the sustainability of agricultural production but also deepen our understanding of the relationship between soil microbes and rice growth.
Compared to the N0M0 treatment, the application of varying amounts of green manure combined with N fertilizer altered the soil bacterial community and significantly enhanced rice yield in karst paddy areas. The application of green manure alone provided a natural nutrient source for rice through its own decomposition and through symbiosis with nitrogen-fixing bacteria, leading to increases in soil TN, AN, AK, and AP. Conversely, the application of a large amount of N fertilizer reduced the soil C:N ratio, destabilizing the native soil bacterial community. Additionally, keystone taxa shifted from their original roles in N fixation (Rhizobiales) and carbon degradation (Latescibacteria and subgroups of Acidobacteria) to functions associated with carbon degradation (Sphingobacteriales and Chitinophagales), nitrification (Nitrospirae_4-29), and denitrification (Chthoniobacterales). This alteration in soil community composition and function likely plays a crucial role in enhancing nutrient utilization efficiency in rice, thereby significantly increasing rice yield. Future investigations should focus on specific core taxa to gain a deeper understanding of the roles of soil microorganisms and their metabolic activities in shaping soil properties and rice productivity.
Molecular diagnostics and therapies for gastrointestinal tumors: a real-world experience

In gastrointestinal (GI) oncology, genetic alterations can serve both as negative and as positive predictive biomarkers. For instance, since the concept that KRAS-mutant colorectal cancer patients do not benefit from the addition of EGFR antibodies to the chemotherapy backbone was introduced, determination of RAS status has become part of the routine diagnostic workup in patients with advanced colorectal cancer (Di Fiore et al.; Karapetis et al.). In contrast, while there is accumulating evidence that microsatellite-instable (MSI) GI cancers do not derive significant benefit from perioperative chemotherapy, several studies have confirmed a strong positive correlation between MSI-high status and response to immunotherapies (Andre et al.; Le et al.; Pietrantonio et al.; Seymour and Morton; Smyth et al.). Genetic alterations can also create distinct vulnerabilities and serve as targets for precision oncology approaches. With an expanding repertoire of targeted agents and growing clinical data on precision oncology in solid tumors, molecular diagnostics using high-throughput sequencing technologies are becoming increasingly important and are being integrated into routine clinical diagnostics. Especially in higher lines of therapy and in malignancies with limited therapeutic options, panel sequencing can identify molecular targets for therapy and help to ensure that the full spectrum of clinically meaningful treatment options is offered to patients. However, access to targeted drugs is often hampered by the lack of approval by the European authorities, namely the European Medicines Agency (EMA); without this approval, treating physicians need to file for cost coverage with the health insurance companies on an individual basis. In the following, we report our experiences with molecular diagnostics in GI malignancies, outline the formal requirements and temporal processes associated with approval of cost coverage by German health insurance companies, and assess the clinical responses we observed in patients who received individualized therapies at our center.

Molecular diagnostics in clinical routine

Targeted gene panels cover a distinct set of regions within the genome and serve as powerful and cost-effective tools to identify therapeutically relevant alterations in solid malignancies. The size of the panels varies considerably, ranging from only a few genes with direct therapeutic implications to larger panels that also detect rarer genetic variants or recurrent alterations without direct therapeutic implications. Between March 2019 and April 2020, 118 patients received tumor-genetic testing via panel sequencing in our GI oncology unit. The most frequently applied panel was the Oncomine Comprehensive Cancer Assay v3, which was performed in-house and covers 161 of the most relevant cancer driver genes using an amplicon-based approach (109/118). In contrast, the Foundation One CDx assay is a hybrid-capture approach and was performed in a small subset of patients (9/118) by an affiliated external service provider. The cohort that received genetic testing beyond the current standards (such as determination of KRAS status in left-sided colorectal cancer) was heavily biased towards cholangiocellular carcinomas (48.3%), followed by colon carcinomas (16.9%), pancreatic carcinomas (14.4%) and gastric carcinomas (7.6%) (Fig. a).
The mean duration from material submission to receipt of the results was 34.5 days, with longer periods often caused by delayed transfer of tumor samples from external pathology departments. The quantity or quality of the tissue was insufficient for sequencing in 13 cases, leading to the exclusion of these samples from the analysis. In five cases, a re-biopsy was performed. In 101/118 cases at least one genetic alteration was detected, whereas no alterations were reported in 17 samples. Seven of these 17 cases were re-sequenced using the FoundationOne CDx assay, which led to the detection of at least one alteration in all cases. In two patients, these results were of direct therapeutic significance owing to the detection of FGFR2 fusions. The most frequent genetic alterations were detected in TP53 (34.5%), followed by KRAS (31%), IDH (7%), FGFR2 (7%), BRAF (7%) and ERBB2/Her2 (5.3%) (Fig. b). The number of mutations varied between one and nine, with an average of 2.78 genetic aberrations per patient. In 22/113 cases, tier I lesions as defined by the ESMO Scale for Clinical Actionability of molecular Targets (ESCAT) were identified [evidence level ESCAT IA (n = 12/29), IB (n = 7/29) and IC (n = 3/29)] (Mateo et al.), and in seven cases the evidence reached an ESCAT level of II or III.

Cost coverage application to the health insurance companies

A cohort of 53 patients with actionable lesions was identified via in-house panel sequencing (n = 34) or, especially in patients with colon cancer, by targeted sequencing of hotspot regions (such as BRAF) as well as testing for microsatellite stability or PD-L1 expression (n = 19) (Fig. ). Baseline characteristics of all patients are presented in Supplementary Table 1. While 43 applications for cost coverage of individualized treatments were submitted to the health insurance companies, the remaining ten patients received an FGFR inhibitor in the context of a compassionate use program. With 36% each, the most frequent disease entities were cholangiocellular carcinoma and colorectal carcinoma (Fig. a). Patients were in part heavily pretreated, with a median of three prior therapies (mean 2.78). The most frequent actionable lesions were activating BRAF mutations (V600E, n = 14; D594G, n = 2; 30.2%; 14/16 occurred in patients with CRC), activating FGFR2 mutations (n = 2) or FGFR2 fusions (n = 8) (18.8%, all of which occurred in patients with CCA), and microsatellite instability/dMMR (17%), as assessed by PCR and immunohistochemistry (Fig. b, c). Cost coverage requests were based on proof of concept from phase III (69.8%) and phase II trials (30.2%) (Suppl. Table 2). 79% (n = 34) of the requested treatments were categorized as ESCAT level I (IA or IC); 21% of the applications fell into a lower ESCAT level. Twenty-six applications (60.5%) were approved by the health insurance companies upon the initial request. In twelve cases (27.9%) the first application was rejected (seven with ESCAT IA and five with a lower ESCAT level). Consequently, seven patients filed an appeal, which was granted in three cases (Fig. a). In the rejection letters, an alternative treatment was usually suggested, which, however, we had deemed either less promising or too toxic for the patient. In some cases, recommendations were given that were clearly not indicated, such as the administration of anti-EGFR agents in KRAS-mutated colorectal carcinoma.
The timeframe required by the insurance companies to process and respond to the requests varied profoundly. The median duration from application to first feedback from the health insurance company regarding acceptance or rejection was 31 days (range, 4–79 days) (Fig. b). According to section 13, paragraph 3a, Volume V of the German Social Insurance Code, the legal period for processing an application is 3 weeks, with a maximum extension to 5 weeks if a medical expert opinion is required. In 11 of 43 cases (26%) this five-week cutoff was exceeded, which, in some cases, significantly delayed therapy initiation. Seven patients died before (n = 4) or shortly after (n = 3) receiving confirmation of cost coverage and prior to initiation of the targeted therapy. In total, a median of 75 days elapsed from the initiation of molecular diagnostics to the start of molecular therapy (Fig. c). In particular, there were significant delays due to tissue logistics (transfer from external pathology departments) and lengthy processing of applications by the health insurance companies. In individual cases, this led to a cumulative delay in therapy initiation of up to 6 months.

Individualized tumor therapy for gastrointestinal tumors

Individualized treatments based on molecular diagnostics were initiated in 35 patients between March 2019 and April 2020, either following approval by the health insurance company (n = 25) or in the context of a compassionate use program (n = 10). The most frequent therapy was the combination of cetuximab, encorafenib and binimetinib according to the BEACON trial (n = 8), which has since received EMA approval (Kopetz et al.). Ten patients with intrahepatic cholangiocarcinoma and FGFR2 alterations were treated with an FGFR inhibitor (derazantinib or pemigatinib), the latter now being EMA-approved for pretreated patients harboring FGFR2 fusions, and five patients received pembrolizumab (Fig. d). Follow-up data are available for 33 patients. Two patients with a very high tumor burden at baseline did not reach the 3-month follow-up because of tumor progression. The first imaging was performed after a median of 11 weeks. The overall response rate was 21.2%, with 4 (12.1%) partial and 3 (9.1%) complete responses.
Targeted gene panels cover a distinct set of regions within the genome and serve as powerful and cost-effective tools to identify therapeutically relevant alterations in solid malignancies. The size of the panels varies considerably, ranging from only few genes with direct therapeutic implications, to larger panels that detect also more rare genetic variants, or recurrent alterations without direct therapeutic implications. Between March 2019 and April 2020, 118 patients received tumor-genetic testing via panel sequencing in our GI oncology unit. The most frequently applied panel was the Oncomine Comprehensive Cancer Assay v3 that was performed in-house and covers 161 of the most relevant cancer driver genes based on an amplicon approach (109/118). In contrast, the Foundation One CDx assay is a hybrid capture approach and was performed in a small subset of patients (9/118) by an affiliated external service provider. The cohort that received genetic testing beyond the current standards (such as determination of KRAS status in left-sided colorectal cancer) was heavily biased towards cholangiocellular carcinomas (48.3%), followed by colon carcinomas (16.9%), pancreatic carcinomas (14.4%) and gastric carcinomas (7.6%) (Fig. a). The mean duration from material submission to receipt of the results was 34.5 days, with longer periods often caused by delayed transfer of tumor samples from external pathologies. The quantity or quality of the tissue was insufficient for sequencing in 13 cases, leading to the subsequent exclusion of these samples from the analysis. In five cases, a re-biopsy was performed. In 101/118 cases at least one genetic alteration was detected, whereas no alterations were reported in 17 samples. Seven out of these 17 cases were re-sequenced using the FoundationOne CDx assay, which led to the detection of at least one alteration in all cases. In two patients, these results were of direct therapeutic significance due to detection of FGFR2 fusions. The most frequently detected genetic alterations were detected in TP53 (34.5%), followed by KRAS (31%), IDH (7%), FGFR2 (7%), BRAF (7%) and ERBB2/Her2 (5.3%) (Fig. b). The number of mutations varied between one and nine, with an average of 2.78 genetic aberrations per patient. In 22/113 cases, tier I lesions as defined by the ESMO Scale of Clinical Actionability for molecular Targets (ESCAT) were identified [evidence level of ESCAT IA ( n = 12/29), IB ( n = 7/29) and IC ( n = 3/29)] (Mateo et al. ) and in seven cases, the evidence reached an ESCAT level of II or III. A cohort of 53 patients with actionable lesions was identified via in-house panel sequencing ( n = 34) or, especially in patients with colon cancer, by targeted sequencing of hotspot regions (such as BRAF ) as well as testing for microsatellite stability or PDL-1 expression ( n = 19) (Fig. ). Baseline characteristics of all patients are presented in supplementary table 1. While 43 applications for cost coverage for individualized treatments were submitted to the health insurance companies, the remaining ten patients received an FGFR Inhibitor in the context of a compassionate use program. With 36% each, the most frequent disease entities were cholangiocellular carcinoma and colorectal carcinoma (Fig. a). In part, patients were heavily pretreated with a median of three prior therapies (mean 2.78). 
The most frequent actionable lesions were activating BRAF mutations (V600E, n = 14; D594G, n = 2; 30.2%; 14/16 occurred in patients with CRC), activating FGFR2 mutations (n = 2) or FGFR2 fusions (n = 8) (18.8%, all of which occurred in patients with CCA), and microsatellite instability/dMMR (17%), as assessed by PCR and immunohistochemistry (Fig. b, c). Cost coverage requests were based on proof of concept from phase III (69.8%) and phase II trials (30.2%) (Suppl. Table 2). Of the requested treatments, 79% (n = 34) were categorized as ESCAT level I (IA or IC), and 21% of the applications were assigned a lower ESCAT level. Twenty-six applications (60.5%) were approved by the health insurance companies upon the initial request. In twelve cases (27.9%), the first application was rejected (seven with ESCAT IA and five with a lower ESCAT level). Consequently, seven patients filed an appeal, which was granted in three cases (Fig. a). In the rejection letters, an alternative treatment was usually suggested, which, however, we had deemed either less promising or too toxic for the patient. In some cases, recommendations were given that were clearly not indicated, such as the administration of anti-EGFR substances in KRAS-mutated colorectal carcinoma. The timeframe required by the insurance agencies to process and respond to the requests varied profoundly. The median duration from application to first feedback from the health insurance company regarding acceptance or rejection was 31 days (range, 4–79 days) (Fig. b). According to section 13 paragraph 3a Volume V of the German Social Insurance Code, the legal period for processing an application is 3 weeks, with a maximum extension to 5 weeks if a medical expert opinion is required. In 11 of 43 cases (26%), the five-week cutoff was exceeded, which, in some cases, significantly delayed therapy initiation. Seven patients died before (n = 4) or shortly after (n = 3) receiving confirmation of cost coverage and prior to initiation of the targeted therapy. In total, a median of 75 days elapsed from the initiation of molecular diagnostics to the start of molecular therapy (Fig. c). In particular, there were significant delays due to tissue logistics (transfer from external pathology departments) and lengthy processing of applications by the health insurance companies. In individual cases, this led to a cumulative delay in therapy initiation of up to 6 months.

Individualized tumor therapy for gastrointestinal tumors

Individualized treatments based on molecular diagnostics were initiated in 35 patients between March 2019 and April 2020, either following approval by the health insurance company (n = 25) or in the context of a compassionate use program (n = 10). The most frequent therapy was the combination of cetuximab, encorafenib and binimetinib according to the BEACON trial (n = 8), which has since received EMA approval (Kopetz et al. ). Ten patients with intrahepatic cholangiocarcinoma and FGFR2 alterations were treated with an FGFR inhibitor (derazantinib or pemigatinib), the latter of which is now EMA-approved for pretreated patients harboring FGFR2 fusions, and five patients received pembrolizumab (Fig. d). Follow-up data are available for 33 patients. Two patients with a very high tumor burden at baseline did not reach the 3-month follow-up due to tumor progression. The first imaging was performed after a median of 11 weeks. The overall response rate was 21.2%, with 4 (12.1%) partial and 3 (9.1%) complete responses.
Disease stabilization was reached as best response in an additional 13 patients, resulting in an overall disease control rate of 60.6%. All three patients with a complete response received immunotherapy on the basis of microsatellite instability. At the time of data cutoff, a total of 20 patients had progressed under targeted therapy, 13 of whom presented with early progression at the first 3-month follow-up. In patients with disease control, the shortest and longest duration of response or stabilization was 116 and 1143 days, respectively, with a median of 348 days (mean 405 days) (Figs. and ). Progression-free survival (PFS) under molecular therapy was not statistically different from PFS under the prior treatment regimen but showed a trend in favor of the targeted therapy (Fig. a). Furthermore, patients who responded at 3 months after initiation of targeted therapy had a longer PFS under molecular therapy than under the immediately preceding treatment regimen (Fig. b; p = 0.007), indicating that early response might serve as a surrogate marker for treatment efficacy.

The clinical relevance of precision oncology is increasingly recognized for solid malignancies, including gastrointestinal cancers. Targeted treatments can extend the therapeutic spectrum on an individual patient's basis and, in some cases, have the potential to significantly alter the clinical course of the disease. Treatment-relevant genetic alterations are frequently diagnosed by panel sequencing. To ensure that effective therapies are not withheld from patients, it is critical to choose molecular diagnostics that are capable of detecting all therapeutically relevant genetic alterations. The selection of a panel is often heavily influenced by the diagnostic procedures established at the local molecular pathology department and may represent a compromise between cost-effectiveness and diagnostic depth. Some focused panels are customized for specific tumor entities and may therefore fail to provide sufficiently comprehensive information if applied to other cancers. The importance of matching the panel diagnostics to the genetic landscape of the individual entities is exemplified by biliary tract cancers: while FGFR2 fusions are nearly absent in extrahepatic cholangiocarcinomas, they occur with high frequency (10–15%) in patients with intrahepatic cholangiocarcinoma (Lamarca et al. ). Therefore, although a specific panel might be well suited for the diagnostic workup of tumors that arise from the extrahepatic bile ducts, it might fail to detect critical alterations in their intrahepatic counterparts. An additional and important caveat, which can easily be overlooked by treating physicians, is that even if a panel lists specific gene names, not all platforms necessarily cover the entirety of relevant alterations. As an example, a post hoc analysis of a recent clinical trial (FIGHT-202) revealed that only 50% of the detected FGFR2 fusions had been described before (Abou-Alfa et al. ; Silverman et al. ). Based on the hybrid capture technology of the FoundationOne CDx assay that was used as companion diagnostic within the FIGHT-202 trial, chromosomal FGFR2 rearrangements could be detected without prior knowledge of the partner gene, whereas amplicon-based panels would have failed to detect rearrangements for which specific partner gene primers were missing from the sequencing reaction.
The Oncomine Comprehensive Cancer panel v3, for instance, currently detects only 25 different FGFR2 fusions of the > 150 FGFR fusions documented to date. In line with this disparity, we identified FGFR2 fusions in two of the nine patients from our local cohort that had not been detected by the amplicon approach. Ideally, however, repeated rounds of sequencing diagnostics should be avoided by choosing the most suitable testing strategy upfront, circumventing unnecessary cost and loss of time. To match patients with optimized molecular tests, close interaction between molecular pathology and clinical care providers is crucial, and we advocate that these diagnostics are best performed in centralized referral centers. Traditionally, pathologists were primarily involved in the initial organ-specific classification of the malignant disease. Now, the molecular pathologist is becoming more visible in daily clinical practice, as informed clinical decision-making warrants careful evaluation and interpretation of sequencing results. The importance of this interaction is reflected by the growing number of molecular tumor boards (Hoefflin et al. ). Finally, standardized “tracking” systems for tissues shipped from external pathology departments are lacking, and uncertainty concerning the whereabouts of the materials needed for panel sequencing can further complicate the diagnostic process and prolong the time to treatment initiation. Together, this leads to an unacceptable number of patients who deteriorate prior to initiation of targeted therapies. Our single-center experience illustrates that the integration of precision medicine into clinical treatment concepts continues to be a challenge in GI oncology in Germany: especially in “rare” malignancies such as cancers of the biliary system, the individual genetically defined subgroups are small, which often hampers patient accrual for precision oncology trials. Positive data from phase III trials, however, are commonly expected before targeted agents gain approval by the European Medicines Agency (EMA). Prior to EMA approval, physicians are usually required to file for cost coverage with the insurance providers on the basis of the individual clinical records, which can be a time-consuming process. In some cases, the timeframe until a response was issued by the German insurance providers exceeded 5 weeks. Furthermore, the experience of our real-world cohort shows that the reasons for denial of coverage are often not based on a lack of evidence: in multiple cases, applications were rejected despite a high level of evidence (ESCAT IA). Early access/compassionate use programs can fill the gap between clinically meaningful data and EMA approval, and in the absence of suitable clinical trials, the possibility of including patients in such programs should be explored. In our analysis, the comparison of progression-free survival favored the personalized therapy approach, and we observed a significant increase in progression-free survival under molecular therapy compared with the immediately preceding therapy in those patients who initially responded to the personalized approach. To further optimize outcomes and avoid toxicity as well as unnecessary costs of targeted drugs, molecular as well as clinical “biomarkers”, such as the optimal timing of response assessment, should be evaluated consistently and assessed for their suitability as early predictors of treatment efficacy.
Of note, patients were frequently referred to our center at advanced disease stages and after several lines of therapy. It is therefore well conceivable that the efficacy of precision oncology in real-world cohorts of GI cancer patients does not match the responses reported from clinical trials. Especially in cancers with a limited number of effective or approved treatments, such as cholangiocarcinoma, we strongly advocate the early implementation of molecular diagnostics, even though insurance agencies in Germany usually grant cost coverage for individualized approaches only after exhaustion of standard treatments. This early testing strategy is also endorsed by recommendations published by the European Society for Medical Oncology (ESMO) (Mosele et al. ). In summary, we believe that the potential of precision medicine in GI cancers is not yet fully exploited in Germany and that the hurdles that must be cleared before a patient receives a molecular therapy are substantial and time-consuming. More precise guidelines on the initiation of NGS diagnostics, together with routine referral to reference centers with ample experience in the contextual interpretation of, and reaction to, molecular results, would likely benefit the implementation of precision oncology in Germany. Below is the link to the electronic supplementary material. Supplementary file 1 (DOCX 34 KB)
Observational, causal relationship and shared genetic basis between cholelithiasis and gastroesophageal reflux disease: evidence from a cohort study and comprehensive genetic analysis

Cholelithiasis, a condition characterized by lithic deposits of either cholesterol or bilirubin in the gallbladder or bile ducts, is one of the most prevalent digestive disorders and imposes a significant socioeconomic burden. Cholelithiasis affects nearly 20% of the adult population worldwide, with a continuously rising incidence. The development of cholelithiasis involves intricate mechanisms encompassing genetic and environmental factors and their interactions. Gastrointestinal dysfunction in patients with cholelithiasis has raised widespread concern and requires further exploration. Gastroesophageal reflux disease (GERD) is a common gastrointestinal disorder typically characterized by recurrent heartburn and regurgitation. The condition poses a substantial public health challenge owing to its association with a spectrum of subsequent severe complications, including Barrett's esophagus, esophageal stenosis, and esophageal adenocarcinoma. Therefore, early identification and vigilant monitoring of individuals at high risk for GERD can facilitate timely intervention, potentially mitigating the severity of the disease and decreasing the risk of GERD and GERD-related complications. Several studies have investigated the correlation between cholelithiasis and the risk of GERD. Nonetheless, the existing findings are inconsistent and insufficient, lacking support from prospective studies. For instance, a retrospective observational study involving 1,381,004 individuals with gallstone disease found that 40% of the patients had concurrent GERD. In contrast, a case-control study comprising 790 cases and 407 controls demonstrated no association between the presence of cholelithiasis and GERD. Most previous studies are outdated and statistically underpowered owing to small sample sizes. In addition, such observational studies are prone to inherent limitations, including potential reverse causality and confounding. The causal association between cholelithiasis and GERD therefore remains obscure, and large datasets and updated methodologies are warranted to disentangle the conflicting relationship and to reveal the underlying genetic underpinnings. The evolution of genetic statistical methods has facilitated the understanding of the interconnected genetic basis of complex diseases, providing novel perspectives on the potential biological mechanisms behind epidemiologic correlations. In our study, we initiated a comprehensive evaluation of the correlations and the shared genetic basis between cholelithiasis and GERD via a prospective cohort study, Mendelian randomization (MR) analyses, and a range of genetic analyses (Fig. ).
Data summary

Prospective data from the UK Biobank

The UK Biobank (UKB) is a large-scale prospective cohort study of 502,368 participants aged 37–73 years who were recruited between 2006 and 2010. Participants visited 1 of 22 assessment centers across England, Scotland, and Wales to complete touch-screen questionnaires, verbal interviews, and physical measurements at recruitment. Data on hospital admissions were collected regularly through linkage to the Scottish Morbidity Records, the Patient Episode Database, and Hospital Episode Statistics. Information on death was obtained from the National Health Service Central Register and National Health Service Digital. This study was conducted under UK Biobank project 83339. The UK Biobank received ethical approval from the North West Multi-Centre Research Ethics Committee (21/NW/0157, 16/NW/0274, and 11/NW/0382). Diagnostic information was sourced from primary care data, hospital admission data, and death registry records. Diagnoses were defined according to the International Classification of Diseases, 10th revision (ICD-10): code K80 for cholelithiasis and code K21 for GERD. As shown in the flowchart ( ), participants with self-reported cholelithiasis or GERD (N = 13,320) or without follow-up data (N = 1,298) were excluded, leaving 487,750 individuals. To ensure a similar distribution of follow-up time between groups, the index date of participants in the control group was manually assigned based on the distribution of the first diagnosis dates of patients with the disease of interest in the corresponding analysis. After excluding 69,862 participants with a history of GERD before the index date, 417,888 participants were included in the analysis of the association between cholelithiasis and GERD; after excluding 62,031 participants with a history of cholelithiasis before the index date, 425,719 participants were included in the analysis of the association between GERD and cholelithiasis. Follow-up time was calculated from the index date to the diagnosis of the outcome of interest, death, or the censoring date (30 October 2022), whichever occurred first.

Genome-wide association study datasets

Genome-wide association study (GWAS) summary data for cholelithiasis were obtained from the FinnGen database, comprising 32,894 cholelithiasis cases and 301,383 controls of European ancestry. The cholelithiasis phenotype was defined with ICD-10 code K80, ICD-9 code 574, and ICD-8 code 574. GWAS summary data for GERD were obtained from a meta-analysis of 332,601 individuals (71,522 cases and 261,079 controls of European ancestry) combining the 2 largest existing genetic studies of GERD (UKB and the QSkin study). The phenotype definitions ranged from self-reported GERD and ICD-10 codes to the use of GERD medication. As a replication dataset for GERD, we used summary data on 129,080 cases and 473,524 controls of European ancestry from UK and Australian populations. Detailed information on sample collection, quality control, and imputation for these datasets is provided in the original articles. There is no population overlap between the cholelithiasis and GERD datasets. The GWAS summary datasets used in this research are publicly available, and the ethical statements can be found in the publications corresponding to the data. Patients or the public were not involved in the design, conduct, reporting, or dissemination plans of our research.
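To make the follow-up definition above concrete, the base-R sketch below computes event indicators and follow-up time from a manually assigned index date. This is a minimal illustration: the data frame and its column names (index_date, gerd_date, death_date) are hypothetical placeholders, not actual UK Biobank field identifiers.

```r
# Minimal sketch of the follow-up computation described above.
# Column names are hypothetical placeholders, not UK Biobank field IDs.
censor_date <- as.Date("2022-10-30")

build_cohort <- function(df) {
  # Exclude participants with GERD diagnosed before their index date
  df <- df[is.na(df$gerd_date) | df$gerd_date >= df$index_date, ]
  # Follow-up ends at the earliest of diagnosis, death, or censoring
  end_date <- pmin(df$gerd_date, df$death_date, censor_date, na.rm = TRUE)
  df$event <- as.integer(!is.na(df$gerd_date) & df$gerd_date <= end_date)
  df$time_years <- as.numeric(end_date - df$index_date) / 365.25
  df
}
```

Person-years of follow-up then accrue as sum(df$time_years), and the event indicator together with the follow-up time feeds directly into the Cox models described in the next section.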
Statistical analysis

Observational analysis

To handle missing covariate data, we applied multiple imputation by chained equations (the MICE package in R) with a predictive mean matching method that combines regression models and nearest-neighbor matching. Five imputations and 50 iterations were performed, and one of the five imputations was randomly selected as the final imputed dataset. We constructed a Cox proportional hazards regression model with cholelithiasis as the exposure to calculate hazard ratios (HRs) and 95% confidence intervals (CIs). The proportional hazards assumption was tested with Schoenfeld residuals, and no evidence of violation was found. Three sets of adjustments were established to minimize confounding: model 1 was unadjusted; model 2 was adjusted for age and sex only; and model 3 was further adjusted for ethnicity, average total annual household income, deprivation index, body mass index (BMI), alcohol consumption, smoking status, physical activity, education, fresh fruit consumption, raw vegetable consumption, tea consumption, coffee consumption, hypertension, diabetes, renal failure, myocardial infarction, stroke, chronic obstructive pulmonary disease, asthma, anxiety, depression, and peptic ulcer. All analyses were performed using RStudio (RRID:SCR_000432) and R 4.2.1. Statistical significance was set at a 2-tailed P value of less than 0.05.

Mendelian randomization analysis

We performed bidirectional MR analyses to explore the potential causal relationship between cholelithiasis and GERD, using the R packages “TwoSampleMR” and “MR-PRESSO” in R 4.2.1. MR analysis uses genetic variants as instruments, and the validity of its causal inference relies on 3 critical assumptions: independence, relevance, and exclusion restriction. These assumptions are indispensable for mitigating bias and establishing causality. Only single nucleotide polymorphisms (SNPs) independently associated with the exposure at a P threshold of 5 × 10−8 and satisfying the linkage disequilibrium (LD) clumping criteria of r2 < 0.001 within a 10,000-kb window were used as instruments. Additionally, we searched the instrumental variables in the GWAS Catalog to identify potential confounders such as BMI, smoking, and certain dietary habits, and excluded the confounding variants from further analyses. We employed inverse variance weighting (IVW) as the main MR approach, complemented by 3 sensitivity methods (MR-Egger, weighted median, and weighted mode) to assess the causal relationships between cholelithiasis and GERD. The methods rest on different assumptions concerning horizontal pleiotropy: the IVW model, assuming balanced pleiotropy, applies multiplicative random effects to meta-analyze the Wald estimates of each SNP; the MR-Egger model allows uncorrelated directional pleiotropy by adding a nonzero intercept, relaxing the assumptions placed on the selected genetic variants; the weighted median model remains consistent when at least 50% of the weight comes from valid instruments; and the weighted mode model is robust as long as the largest group of variants is valid, conferring greater resilience to pleiotropy. We conducted the MR-Egger intercept test, Cochran's Q statistic, MR-PRESSO, and leave-one-out analyses to evaluate heterogeneity, pleiotropy, and potential outliers in the MR results.
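The MR pipeline just outlined can be sketched with the TwoSampleMR and MRPRESSO packages. This is a schematic under stated assumptions: the input file names are placeholders for the FinnGen and GERD summary statistics, and the column mappings would need to be adapted to the actual files.

```r
library(TwoSampleMR)
library(MRPRESSO)

# Exposure instruments: genome-wide significant, LD-clumped SNPs
# (file names are placeholders for the FinnGen/GERD summary data)
exp_dat <- read_exposure_data("cholelithiasis_gwas.txt", sep = "\t")
exp_dat <- exp_dat[exp_dat$pval.exposure < 5e-8, ]
exp_dat <- clump_data(exp_dat, clump_r2 = 0.001, clump_kb = 10000)

out_dat <- read_outcome_data("gerd_gwas.txt", snps = exp_dat$SNP, sep = "\t")
dat <- harmonise_data(exp_dat, out_dat)

# IVW (multiplicative random effects) plus three sensitivity estimators
res <- mr(dat, method_list = c("mr_ivw_mre", "mr_egger_regression",
                               "mr_weighted_median", "mr_weighted_mode"))

# Pleiotropy, heterogeneity, and outlier diagnostics
mr_pleiotropy_test(dat)   # MR-Egger intercept
mr_heterogeneity(dat)     # Cochran's Q
mr_leaveoneout(dat)
mr_presso(BetaOutcome = "beta.outcome", BetaExposure = "beta.exposure",
          SdOutcome = "se.outcome", SdExposure = "se.exposure",
          OUTLIERtest = TRUE, DISTORTIONtest = TRUE, data = dat,
          NbDistribution = 1000, SignifThreshold = 0.05)
```

The reverse-direction analysis simply swaps the exposure and outcome files.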
If heterogeneity was detected in the MR analysis (P < 0.05), we recalculated the MR estimates after removing outliers identified in the MR-PRESSO outlier test (P < 1) to ensure the robustness of the MR results. The MR analysis in this research is reported in accordance with the STROBE-MR guideline (Strengthening the Reporting of Observational Studies in Epidemiology using Mendelian Randomization).

Global genetic correlation analysis

To quantify the heritability of each trait and the global genetic correlation between cholelithiasis and GERD, we applied the linkage disequilibrium score regression (LDSC) method with Python 2.7. Based on precomputed LD scores derived from the 1000 Genomes European reference data, we selected SNPs in the GWAS datasets that matched the reference panel (minor allele frequency [MAF] > 0.01 and INFO score > 0.9). We used univariate LDSC to estimate the SNP heritability of each trait and bivariate LDSC to calculate the genetic correlation between cholelithiasis and GERD, with and without constraining the intercept. Based on prevalence rates of 20% for cholelithiasis and 17.1% for GERD, we converted the reported heritability of both traits to the liability scale. A genetic correlation with a P value of less than 0.05 was considered significant. Additionally, we employed the genetic covariance analyzer (GNOVA) as a supplementary method to validate the genetic correlations. The quality-control steps for the GWAS datasets were similar to those of the LDSC method; more detailed descriptions are given in the original study. Based on its annotation-stratified genetic covariance estimation framework, GNOVA provides more powerful statistical inference of the shared genetic basis between complex traits and shows higher estimation accuracy. A threshold of P < 0.05 was regarded as strong evidence for MAF-stratified genetic correlation.

Local genetic correlation analysis

To identify whether cholelithiasis and GERD are genetically correlated in local genomic regions, we further applied the heritability estimator from summary statistics (ρ-HESS) with Python 2.7. We first calculated the LD blocks and eigenvalues with reference to the European samples of the 1000 Genomes Project. Then, we estimated the local SNP heritability of each trait and the local genetic correlation across 1,613 approximately LD-independent regions. Suggestive genetic associations with a P value of less than 0.05 were noted. Similarly, pairwise GWAS (GWAS-PW) was used as a supplementary approach to explore significantly shared local regions. Based on a Bayesian statistical framework, GWAS-PW calculates posterior probabilities of association (PPAs) for each genomic region across 4 models. Genomic regions with a PPA of model 3 larger than 0.5 were considered significantly associated with both traits, in accordance with a previous article.
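As a pointer to how the global genetic correlation (LDSC) step above is typically run (the text notes Python 2.7), the following R wrapper sketches the command-line calls. All file paths are placeholders; the flags follow the publicly documented munge_sumstats.py/ldsc.py interface.

```r
# Munge each GWAS to LDSC format, then estimate the cross-trait rg.
# File names and reference directories are placeholders.
system2("python2", c("munge_sumstats.py",
                     "--sumstats", "cholelithiasis_gwas.txt",
                     "--merge-alleles", "w_hm3.snplist",
                     "--out", "chol"))
system2("python2", c("munge_sumstats.py",
                     "--sumstats", "gerd_gwas.txt",
                     "--merge-alleles", "w_hm3.snplist",
                     "--out", "gerd"))
system2("python2", c("ldsc.py",
                     "--rg", "chol.sumstats.gz,gerd.sumstats.gz",
                     "--ref-ld-chr", "eur_w_ld_chr/",
                     "--w-ld-chr", "eur_w_ld_chr/",
                     "--out", "chol_gerd_rg"))
```

Liability-scale heritability estimates can additionally be requested via the --samp-prev and --pop-prev options of ldsc.py.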
Cross-trait meta-analysis

To detect shared genetic variants between cholelithiasis and GERD, we performed a multitrait analysis of GWAS (MTAG). MTAG rests on the fundamental assumption that all SNPs exhibit the same variance–covariance matrix of effect sizes and heritability across traits. To meet this assumption, we rigorously filtered the MTAG SNPs (MAF ≥ 1% and sample size ≥ 75% of the 90th percentile) and dropped the outliers. By jointly analyzing multiple traits, MTAG substantially enhances the statistical power to detect genetic associations for each trait and generates trait-specific estimates for each SNP. To identify significant and independent loci, we applied the threshold P MTAG < 5 × 10−8 and the “clumping” function of PLINK (settings: clump_p1 = 5e-8, clump_p2 = 1e-5, clump_r2 = 0.2, clump_kb = 500). The cross-phenotype association test (CPASSOC) is a complementary method to deduce shared risk SNPs between complex traits. Compared with single-trait analysis, CPASSOC improves statistical power while reasonably controlling the type I error rate. Considering the heterogeneity of effects across phenotypes, we primarily used the heterogeneous version of the cross-phenotype statistic (SHet) to integrate association evidence from different but correlated traits. Given the inherent variability induced by the random sampling embedded in this method, we set the random seed to 123 to ensure reproducible results. After obtaining the estimates, we identified independent loci using the “clumping” function of PLINK (settings as before). The variant with the smallest P value in each locus was regarded as the index SNP. Index SNPs that met the criteria of P CPASSOC < 5 × 10−8 and P each trait < 1 × 10−3 were deemed significant pleiotropic SNPs. Newly discovered pleiotropic SNPs were defined as significant pleiotropic SNPs that were not genome-wide significant (5 × 10−8 < P each trait < 1 × 10−3) and were independent (r2 < 0.20) of previously identified trait-related genome-wide significant SNPs, with no adjacent SNP (±500 kb) reaching P < 5 × 10−8 in either GWAS dataset. We used dbSNP and 3DSNP for detailed functional annotation of the identified pleiotropic SNPs.

Transcriptome-wide association analysis

Numerous genetic variants influence complex traits through the regulation of gene expression. To identify significant gene–trait associations, we implemented a transcriptome-wide association study (TWAS) leveraging the FUSION software. Based on the LD reference data of the European 1000 Genomes samples, we converted the GWASs of cholelithiasis and GERD into an LD-score format. We prioritized trait-related tissues and therefore prepared expression quantitative trait loci (eQTL) data of whole blood, liver, stomach, and esophagus-related tissues from GTEx v8 (Genotype-Tissue Expression, version 8). By integrating the precomputed phenotypic summary data with the corresponding eQTL data, we identified significant tissue-specific genes at a false discovery rate (FDR) < 0.05 for each trait and selected genes that overlapped between cholelithiasis and GERD in the same tissue. Summary data-based Mendelian randomization (SMR) analysis was used as a complementary method to deduce causative genes underlying cholelithiasis and GERD. We used the eQTL data of whole blood, liver, stomach, and esophagus-related tissues from GTEx v8 and cis-eQTL data of whole blood from the eQTLGen consortium. The heterogeneity in dependent instruments (HEIDI) test was conducted to distinguish pleiotropy or causality from linkage. We primarily focused on genes with FDR < 0.05 that passed the P value threshold of the HEIDI test (P HEIDI > 0.05).
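A minimal sketch of the SMR step above, wrapped in R for consistency with the other examples. The binary location, LD reference, and eQTL files are placeholders; the flags are the core options from the SMR tool's public documentation, and the HEIDI test runs by default alongside SMR.

```r
# SMR + HEIDI for one tissue's eQTL data (all file names are placeholders).
system2("smr", c("--bfile", "g1000_eur",                # LD reference (PLINK binary format)
                 "--gwas-summary", "gerd_gwas.ma",      # GWAS in SMR .ma format
                 "--beqtl-summary", "eqtl_whole_blood", # eQTL in BESD format
                 "--out", "gerd_whole_blood_smr"))
```

The same call would be repeated per trait and per tissue (GTEx v8 liver, stomach, and esophagus panels, and the eQTLGen whole-blood cis-eQTLs), with P HEIDI read from the output table.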
Pathway enrichment and biomolecular network analyses

To gain shared biological insights into cholelithiasis and GERD, we conducted functional annotation of the pleiotropic SNPs and shared genes using multiple methods. We utilized the knowledge-based Kyoto Encyclopedia of Genes and Genomes (KEGG) and Gene Ontology (GO) databases to perform pathway enrichment analyses identifying pathways associated with these genes, using the clusterProfiler R package (RRID:SCR_016884). P values from the pathway enrichment analyses were adjusted for multiple comparisons with the FDR approach. In addition, we utilized the STRING database to identify interactions among the proteins mapped to the pleiotropic SNPs and shared functional genes.
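The KEGG/GO enrichment described above can be sketched in a few lines of clusterProfiler. The input gene symbols are examples taken from the shared genes reported in the Results; the mapping step and cutoffs are illustrative rather than the exact parameters used.

```r
library(clusterProfiler)
library(org.Hs.eg.db)

# Map example shared gene symbols (from the Results) to Entrez IDs
genes <- bitr(c("ABCG5", "ABCG8", "CYP7A1", "TM4SF4"),
              fromType = "SYMBOL", toType = "ENTREZID",
              OrgDb = org.Hs.eg.db)$ENTREZID

# KEGG and GO enrichment with FDR adjustment, mirroring the text
kegg <- enrichKEGG(gene = genes, organism = "hsa",
                   pAdjustMethod = "fdr", pvalueCutoff = 0.05)
go <- enrichGO(gene = genes, OrgDb = org.Hs.eg.db, ont = "ALL",
               pAdjustMethod = "fdr", pvalueCutoff = 0.05,
               readable = TRUE)
```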
Observational association between cholelithiasis and GERD

Baseline characteristics of the study cohort by cholelithiasis status are presented in . In total, participants were followed for 2,736,451 person-years, during which 1,628 cholelithiasis patients and 20,780 non-cholelithiasis individuals developed GERD (Table ). In the age/sex-adjusted model, the risk of GERD was 2.28 times higher in cholelithiasis patients than in those without cholelithiasis. In the fully adjusted model, the risk of GERD remained significantly elevated in cholelithiasis patients (HR = 1.99; 95% CI, 1.89–2.10; P < 0.001). We also observed an association between baseline GERD and incident cholelithiasis, as shown in Table . In the age/sex-adjusted model, the HR for cholelithiasis was 2.69 (95% CI, 2.54–2.84; P < 0.001) in GERD patients. In the fully adjusted model, the GERD group likewise displayed a significantly increased risk of developing cholelithiasis (HR = 2.30; 95% CI, 2.18–2.44; P < 0.001).

Causal association between cholelithiasis and GERD

After excluding confounded instrumental variables, we used 46 cholelithiasis-associated and 20 GERD-associated genetic instruments ( ) in the analyses, which provided evidence for a causal association between cholelithiasis and GERD. Genetically determined cholelithiasis increased the risk of GERD by 8% (IVW OR = 1.08; 95% CI, 1.05–1.11; P = 3.70 × 10⁻¹⁰; Fig. , ), a finding further validated by the other 3 MR methods and by analyses with a supplementary dataset ( ). In the reverse MR analysis, genetically predicted GERD increased the risk of cholelithiasis by 15% (OR = 1.15; 95% CI, 1.02–1.31; P = 0.027) according to the IVW method (Fig. , ). This association was further validated by the weighted median method and an additional dataset ( – ), although it could not be confirmed using the MR-Egger and weighted mode methods. The F statistic of each SNP related to cholelithiasis and GERD exceeded the empirical threshold of 10, suggesting little possibility of weak instrument bias ( ). We also performed several sensitivity analyses to validate the causal association between cholelithiasis and GERD. Cochran's Q tests in the IVW and MR-Egger models provided no evidence of heterogeneity in effects across the instrumental variables, and the MR-Egger intercept test (P > 0.05) indicated little evidence of horizontal pleiotropy in the causal estimates ( – ). The leave-one-out analysis suggested that the observed causal relationship was not driven by any single outlier ( – ). The scatterplots, forest plots, and funnel plots of the MR results are displayed in – .

Global and local genetic correlations between cholelithiasis and GERD

SNP-based liability-scale heritability (h²) was 26.65% for cholelithiasis and 14.01% for GERD using univariate LDSC with a constrained intercept; the observed-scale heritability was 6.60% and 7.68%, respectively, using GNOVA. Cross-trait LDSC suggested that cholelithiasis had a relatively strong positive genetic correlation with GERD (rg = 0.31, P = 2.77 × 10⁻²⁷). After constraining the intercept, the genetic correlation decreased but remained significant (rg = 0.25, P = 3.90 × 10⁻⁵⁶).
This finding was consistent with the GNOVA analysis (rg = 0.26, P = 2.50 × 10⁻³²) (Table ). We also tested local genetic correlation using ρ-HESS and GWAS-PW ( ). Seven suggestively significant regions were identified by ρ-HESS and 8 significant regions by GWAS-PW, of which 4 regions overlapped between the two methods. These findings suggested a shared genetic foundation, warranting further exploration to elucidate the underlying biological mechanisms.

Identification of shared risk loci for cholelithiasis and GERD

MTAG identified 8 independent pleiotropic loci (rs146812426, rs4299376, rs6733452, rs7596134, rs4681515, rs9297994, rs10935762, rs3922717), all of which were also significant in CPASSOC (Table , ). CPASSOC identified 23 pleiotropic loci, 5 of which were significant in MTAG (rs9297994, rs10935762, rs3922717, rs12633863, and rs802036) (Table , ). Overall, 10 independently significant loci were identified as shared between cholelithiasis and GERD by both MTAG and CPASSOC (rs146812426, rs4299376, rs6733452, rs7596134, rs10935762, rs12633863, rs4681515, rs3922717, rs802036, and rs9297994), mapping to 9 genes: PLEKHH2, ABCG8, DYNC2LI1, ABCG5, TM4SF4, LOC100270746, CROT, UBXN2B, and CYP7A1 (Table ). Notably, 5 novel pleiotropic loci were identified in the CPASSOC analysis (rs10167227, rs6742945, rs335208, rs72664027, and rs11537754), mapping to PNPT1, LOC105369165, PRDM6, LINC02842, and RAB11FIP3, respectively (Table , ). Other SNP-associated genes are listed in – . After correction for multiple comparisons, pathway enrichment analysis of these genes using the KEGG database identified 5 pathways: cholesterol metabolism, bile secretion, fat digestion and absorption, ABC transporters, and primary bile acid biosynthesis (Fig. , ). Enrichment analysis using the GO database identified 65 biological processes, 2 cellular components, and 8 molecular functions, most of them related to lipid and bile acid metabolism (Fig. , ). In the network analysis, we observed a close association among TM4SF4, CYP7A1, ABCG5, and ABCG8 (Fig. ).

Identification of shared genes for cholelithiasis and GERD

Results from tissue-specific TWAS and SMR revealed gene-level genetic overlap. After FDR correction, 15 genes were shared by cholelithiasis and GERD in the TWAS analysis, enriched across 6 tissues: blood, liver, esophagus mucosa, esophagus muscularis, esophagus gastroesophageal junction, and stomach ( ). Among them, 7 genes overlapped significantly in 2 or more tissues, and 5 of these 7 (SUN2, CBY1, JOSD1, DDX17, FAM227A) were located at 22q13.1. The TWAS analysis showed that overexpression of SUN2, JOSD1, and CBY1 was negatively associated with the risk of cholelithiasis and GERD in blood and esophagus-related tissues, whereas overexpression of JOSD1 and CBY1 was positively associated with both diseases in liver tissue. SUN2, JOSD1, and CBY1 also displayed significant SMR association signals (FDR < 0.05) and passed the HEIDI-outlier test in blood, esophagus mucosa, and esophagus muscularis ( ). No significant shared causal gene was found in the other tissues (liver, esophagus gastroesophageal junction, and stomach) according to the SMR results. Using the KEGG database, we found 2 significantly enriched pathways, the Wnt signaling pathway (CBY1) and cytoskeleton in muscle cells (SUN2) (Fig. , ). The Wnt signaling pathway (CBY1) was also significantly enriched in the GO pathway enrichment analysis (Fig. and ). In the network analysis, we did not identify direct interactions among these 3 shared genes.
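For readers who wish to reproduce the bidirectional MR workflow summarised in the results above, the sketch below uses the TwoSampleMR R package; the OpenGWAS dataset identifiers are hypothetical placeholders for the cholelithiasis and GERD summary statistics.

```r
library(TwoSampleMR)

# Hypothetical OpenGWAS IDs; swap exposure/outcome for the reverse direction.
exposure <- extract_instruments(outcomes = "ieu-a-XXXX", p1 = 5e-8, clump = TRUE)
outcome  <- extract_outcome_data(snps = exposure$SNP, outcomes = "ieu-a-YYYY")
dat      <- harmonise_data(exposure, outcome)

# IVW plus the three complementary estimators reported in the text
res <- mr(dat, method_list = c("mr_ivw", "mr_egger_regression",
                               "mr_weighted_median", "mr_weighted_mode"))
generate_odds_ratios(res)    # ORs with 95% CIs

# Sensitivity analyses: Cochran's Q, MR-Egger intercept, leave-one-out
mr_heterogeneity(dat)
mr_pleiotropy_test(dat)
loo <- mr_leaveoneout(dat)
```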
To our knowledge, this is the first study to comprehensively explore the observational, causal, and genetic relationships between cholelithiasis and GERD. By leveraging UK Biobank data and GWAS data, we found a bidirectional causal relationship between cholelithiasis and GERD. The subsequent genetic analyses provided new insights into their shared genetic basis and related biological mechanisms, which may contribute to the prediction, diagnosis, and treatment of these diseases. Previous research has reported that cholelithiasis and GERD share numerous common etiological risk factors, such as obesity , type 2 diabetes mellitus , depression , and smoking . We conducted a Cox proportional hazards regression analysis in the UKB cohort, adjusting for a wide range of established and potential confounders associated with these 2 conditions. Although the HRs were slightly attenuated after controlling for the covariates, the bidirectional association between cholelithiasis and GERD remained statistically significant. This is consistent with the findings of Unalp-Arida et al. and Portincasa et al. , who reported a statistically significant association between cholelithiasis and GERD. Subsequently, using the MR approach, we identified bidirectional causality between cholelithiasis and GERD, although the pathophysiologic mechanisms underlying this causal relationship remain unclear. Previous studies suggested that patients with gallstones show impaired gastric motility , which might be related to the pathogenesis of GERD. Meanwhile, patients with GERD present a higher incidence of gallbladder dyskinesia , which may be attributed to the routine use of proton pump inhibitors (PPIs) in GERD treatment. It has been reported that PPIs can reduce the release of cholecystokinin, which might diminish gallbladder motility and thereby promote the formation of gallstones . The current evidence indicates a potential shared pathogenesis or genetic basis between cholelithiasis and GERD, warranting further exploration. In the analysis of heritability and genetic correlation, the heritability of cholelithiasis and GERD was estimated to be 17% and 13%, respectively, indicating a significant genetic contribution to the etiology of both diseases, consistent with previous studies . The genetic correlation between cholelithiasis and GERD was 0.31, suggesting a moderate to strong genetic association between these conditions. This finding supports the hypothesis that genetic factors, such as local genetic correlations, shared loci, and common functional genes, play an important role in the co-occurrence of cholelithiasis and GERD. We identified 4 regions that exhibited suggestively significant local genetic association, as evidenced by ρ-HESS (P < 0.05) and GWAS-PW (posterior probability > 0.5). Most loci identified by MTAG and CPASSOC were situated within these regions. Moreover, by combining the analyses of local genetic correlation, shared loci, and shared genes, we found that 22q13.1 might be a shared region between gallstone disease and GERD: first, this region showed a suggestively significant local genetic association between cholelithiasis and GERD by GWAS-PW; second, the shared locus rs1056661, identified by CPASSOC, was located within this region; third, 5 and 3 overlapping genes, identified by TWAS and SMR, respectively, were situated within this region. Previous studies have reported that several significant loci related to gallstone disease, including rs12004, rs41281265, and rs1946990, lie in this region .
However, no research has yet linked this region to GERD. Future research is warranted to delve deeper into this specific region to elucidate the genetic correlation between gallstone disease and GERD. Given the significant genetic correlation observed, we conducted cross-trait GWAS meta-analyses to detect risk SNPs underlying the joint cholelithiasis–GERD phenotype. We identified 10 shared, independently significant loci through MTAG and CPASSOC. According to the pathway enrichment analyses, the genes associated with these loci were enriched in pathways related to lipid and bile acid metabolism, including cholesterol metabolism, bile secretion, ABC transporters, and primary bile acid biosynthesis. Several studies have reported that aberrant lipid and bile acid metabolism contributes to the development of both cholelithiasis and GERD . ABCG5 (index SNP: rs7596134), ABCG8 (index SNPs: rs4299376 and rs6733452), and CYP7A1 (index SNP: rs9297994) are associated with lipid metabolism, and numerous investigations have suggested the involvement of these genes in the development of gallstone disease . Although several studies have reported that obesity and dyslipidemia are risk factors for GERD, no research has investigated the involvement of these genes in GERD; this relationship therefore warrants further investigation. Additionally, 5 new loci associated with cholelithiasis and GERD were identified in the CPASSOC analysis. PNPT1 (index SNP: rs10167227) is associated with the mitochondrial respiratory chain, and mutations in PNPT1 can lead to mitochondrial dysfunction, subsequently causing neuromuscular dysfunction that affects the peristaltic function of the gastrointestinal tract . The functions of the long noncoding RNA (lncRNA) gene LINC02842 (index SNP: rs72664027) and the noncoding RNA (ncRNA) gene LOC105369165 (index SNP: rs6742945) remain unclear, but research has suggested that lncRNAs might play a crucial role in dysfunction of the lower esophageal sphincter (LES) , potentially shedding light on the onset of GERD. Additional research is required to provide more detailed functional annotation of these shared loci. Beyond detecting shared loci, we also explored whether the cholelithiasis–GERD association might be mediated by shared risk genes through TWAS and SMR analysis. In total, we identified 3 putatively functional genes shared between cholelithiasis and GERD (SUN2, CBY1, and JOSD1), overexpression of which was negatively associated with the risk of cholelithiasis and GERD in esophagus-related tissues. Prior research has reported negative effects of the CBY1 and SUN2 genes on tumorigenesis , implying a potential role for these genes in the pathogenesis of gallstone disease and GERD, given that these 2 diseases are risk factors for gallbladder and esophageal cancer, respectively . Furthermore, existing studies suggest the involvement of bile acids in GERD progression through activation of the Wnt/β-catenin pathway . CBY1 might be involved in the link between gallstone disease and GERD, as it can inhibit the Wnt/β-catenin pathway , which was enriched in our pathway analyses. JOSD1 is a deubiquitinating enzyme that plays a pivotal role in many cellular biological processes . Our findings imply that JOSD1 may contribute to the mechanisms linking cholelithiasis and GERD via deubiquitination.
In general, our study offers novel insights into the shared genetic basis of cholelithiasis and GERD; additional research is required to elucidate the underlying mechanisms more fully.
We conducted the largest prospective study to date assessing the phenotypic association between cholelithiasis and incident GERD. In addition, we performed a series of sensitivity analyses and applied validation datasets to the MR estimates to enhance the robustness of our results. Furthermore, genetic correlation, pleiotropic loci, and shared genes were each analyzed with 2 different approaches, and the convergent evidence acquired through these dual approaches reinforces the reliability of our findings. However, several limitations need to be acknowledged. First, the causal relationship from GERD to cholelithiasis was not significant in all sensitivity analyses, which may be attributable to limitations of the GWAS statistics; larger and more powerful GWAS data for cholelithiasis and GERD will be needed to establish the causal relationship from GERD to cholelithiasis. Second, all the data used in this study came from populations of European ancestry, which limits the generalizability of our findings to other ethnic populations; future studies involving a broader range of ancestries are warranted. Third, owing to limited GWAS data availability at the time of analysis, we were unable to perform deeper subgroup analyses based on stratification information such as age, sex, and disease severity.
In summary, we found a bidirectional association between cholelithiasis and GERD, which may be attributed to a bidirectional causal relationship and a shared genetic basis, including a significant genetic correlation and novel shared loci and genes. Our findings provide new insights into the biological mechanisms of cholelithiasis and GERD, suggest promising therapeutic targets, and offer an innovative research direction for future therapeutic strategies and risk prediction.
Comparison of the effects of corneal and lacrimal gland denervation on the lacrimal functional unit of rats

The sensory and autonomic neural network supports the ocular surface (OS); therefore, diseases that target this neural network can cause dry eye syndrome (DES) and structural and functional disruption of the ocular surface . Damage to the sensory network of the cornea (CO) (sensory or afferent denervation, SD) and autonomic, efferent neural damage (AD) to the lacrimal gland (LG) share common features, such as aqueous tear deficiency, OS sensitivity changes and inflammation, and corneal epitheliopathy, as evidenced in studies of the lacrimal functional unit (LFU) . Although clinical findings can overlap, and certain conditions affect both the afferent and efferent pathways, the characteristics that distinguish sensory from autonomic LFU damage are unknown . Notably, DES and OS disease affect millions of people worldwide, causing discomfort, visual impairment, and compromise of ocular integrity, and they are frequently associated with other diseases . We hypothesized that understanding the distinctive aspects of each condition causing DES and OS disease could help identify more specific diagnostic and therapeutic modalities, because the diagnostic tests and treatments currently available for DES have low predictive value and efficacy . Benzalkonium chloride (BAK), widely used as a preservative in topical eye medications, induces OS toxicity and causes DES . Topical BAK use induces DES, keratitis, increased cytokine expression, inflammatory cell infiltration of the corneal and conjunctival tissues, and squamous metaplasia; mouse models have shown that these effects can propagate to the trigeminal ganglion (TG) . LG denervation has been widely used since the 1940s as a "cure" for epiphora and tearing caused by environmental factors. Current evidence indicates that neural damage to the exocrine glands, including the LG, in humans and other species is associated with autonomic dysfunction, local inflammation, and secretory impairment, and plays a part in the mechanism of DES, such as in Sjögren's syndrome . This study evaluated the functional and molecular effects of DES caused by BAK (afferent or sensory denervation, SD) or LG nerve ablation (efferent or autonomic denervation, AD) in rats. We hypothesized that SD and AD have distinct mechanistic features despite their similar clinical presentations, and that a deeper investigation into the physiopathology of SD and AD would facilitate better diagnostic, predictive, and therapeutic approaches. The study objectives were to compare the SD and AD models with control rats in terms of tear flow, corneal sensitivity triggered by capsaicin, and the mRNA expression of inflammatory cytokines in the CO, LG, and TG and of tissue repair mediators in the LG.

Animals and study design

All experimental procedures adhered to the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research and were approved by the committee for animal use at the Ribeirao Preto School of Medicine, University of Sao Paulo (Ribeirão Preto, SP, Brazil) (Protocol 109/2008). Wistar male 8-week-old rats weighing 220-250 g, obtained from the Animal Breeding Center of the Ribeirao Preto Medical School, University of Sao Paulo, were divided into three groups, with the AD group subdivided further into two based on the endpoints.
Hence, overall, the following four groups were analyzed:

Sensory denervation (SD): The rats (n=10) received 5 µL of 0.2% BAK twice daily for 7 days in the right eye. BAK (Fluka Analytical, Sigma-Aldrich Brazil Ltda., Cotia, SP, Brazil) was diluted in phosphate buffer at 25 ºC and pH 7.2. The procedure consisted of animal immobilization and instillation of a drop of 0.2% BAK in the right eye; after 10 seconds, allowing the drug to spread, each rat was returned to its cage.

Autonomic denervation (AD): The rats were subjected to surgical denervation of the exorbital LG, and the outcomes were evaluated after 1 and 2 months (n=7 per subgroup). LG denervation was performed as follows: under intramuscular anesthesia with xylazine (Laboratorio Callier S.A., Barcelona, Spain; 15 mg/kg) and ketamine (União Química Farmacêutica S.A, Embu-Guaçu, SP, Brazil; 150 mg/kg), an aseptic skin incision was made between the eye and the ear on the right side. The extraorbital LG was identified, and the LG nerve branch was detected, isolated from the vascular branches, resected, and repositioned, avoiding contact between the cut ends, based on previously described techniques . After hemostatic control, the surgical wound was closed with cyanoacrylate glue (Loctite, Henkel Ltda, Diadema, SP, Brazil) and covered with a single 5-mm application of antibiotic and anti-inflammatory ointment (Cylocort, União Química Farmacêutica Nacional S.A, Brasilia, DF, Brazil). The two AD subgroups were evaluated 1 and 2 months after the procedure (AD 1M and AD 2M).

Control group (CG): A group without any intervention (naïve) was included for comparison and was evaluated after 5 weeks of housing in the same vivarium (n=16).

All rats were housed in cages at a nearly constant temperature (23 ± 2 ºC) on 12-h light-dark cycles, with ad libitum access to standard rodent chow and water.

Eye wipe test

At the end of the experimental period for each group (7 days for the SD group, 1 month and 2 months for the AD subgroups, and 5 weeks for the CG), the rats were subjected to the eye wipe test in response to capsaicin (CAP) to investigate CO sensitivity. After acclimation of the animals to Plexiglas chambers for 1 hour, the right eye of each rat was instilled with 20 µL of 10 µM CAP (Sigma-Aldrich Brazil Ltda., Cotia, SP, Brazil) diluted in PBS at pH 7.2 and 25 ºC. Eye wipe behavior was recorded with a digital camera (DSC-W5, Sony, Japan) for 3 min after CAP instillation. Eye wipe movements (EWT) over these 3 min were counted from the digital recording of each rat by a masked observer using an iMac computer (Apple Inc, Cupertino, CA, USA) and compared with the CG.

Clinical evaluation

Furthermore, to investigate the effects of SD and AD on the CO and tear flow, the animals were evaluated under general anesthesia, after an intraperitoneal injection of ketamine (5 mg/100 g body weight; União Química Farmacêutica S.A, Embu-Guaçu, SP, Brazil) and xylazine (2 mg/100 g body weight; Laboratorio Callier S.A., Barcelona, Spain), to collect the following observations: corneal epithelial integrity was evaluated by slit-lamp examination after 2% sodium fluorescein dye staining; punctate keratitis was graded from 0 to 15, as previously described ; and the presence of epithelial defects was examined.
Tear flow was measured in millimeters over 30 seconds using the phenol red thread test (PRT; Showa Yakuhin Kako Co., Ltd, Tokyo, Japan & Menicon USA Inc., Clovis, CA, USA), and the values obtained were compared among the groups. Notably, the duration of the experimental period differed among the SD, AD, and CG groups after the rats completed 8 weeks of life and were housed in the experimental vivarium: 7 days for the SD group (during BAK use), 1 and 2 months for the AD subgroups (counting from the day of LG nerve ablation), and 5 weeks for the CG (counting from the day of relocation to the experimental vivarium). These durations were based on observations from previously published work and pilot studies indicating the time required for the ocular manifestations to develop . Longer periods of BAK use would induce excessive toxicity, whereas interrupting its use would reverse the treatment effects ; conversely, earlier observation after surgical denervation would reveal only the inflammatory effects of the procedure itself. The experimental period for the CG was selected as intermediate between those of the SD and 2-month AD groups .

Quantitative real-time PCR

After the in vivo observations, the animals were euthanized using ketamine (5 mg/100 g body weight; União Química Farmacêutica S.A, Embu-Guaçu, SP, Brazil), xylazine (2 mg/100 g body weight; Laboratorio Callier S.A., Barcelona, Spain), and thiopental sodium (1000 mg/kg; Laboratório Cristália, São Paulo, SP, Brazil). The CO, LG, and TG tissues were harvested from the right side of the rats of all three groups, immersed in RNA stabilization solution (RNAlater Solution, Ambion, Waltham, MA, USA), and stored at -80 ºC until RNA extraction, quantification, quality evaluation, and quantitative real-time PCR (qPCR) analysis. The relative mRNA expression of the proinflammatory cytokines Il-1β, Il-6, Tnf, and Mmp9 in the LG, CO, and TG samples was compared among the three study groups. In addition, the relative mRNA expression of tissue repair elements in the LG, namely Bmp7, Runx1, Runx3, Fgf10, and Smad1, was compared among the groups. Total RNA was extracted from the tissues using the RNeasy Mini Kit (Qiagen, Germantown, MD, USA), according to the manufacturer's instructions, and quantified using a NanoDrop 2000c spectrophotometer (Thermo Scientific, Wilmington, DE, USA). Samples containing 500 ng of total RNA from CO tissue, 1000 ng from LG tissue, and 350 ng (AD group) or 150 ng (SD group) from TG tissue were used to synthesize cDNA with the QuantiTect Reverse Transcription Kit (Qiagen, Germantown, MD, USA) in the ProFlex PCR System (Applied Biosystems, Carlsbad, CA, USA). The qPCR was performed with hydrolysis probes on the ViiA7 Real-Time PCR System (Applied Biosystems, Carlsbad, CA, USA). The following hydrolysis probes were used in this study: Rn.PT 5838028824 (Il-1β), Rn.PT 5813840513 (Il-6), Rn.PT 5811142874 (Tnf), Rn.PT 587383134 (Mmp9), Rn.PT 5810180444 (Bmp7), Rn.PT 5810814634 (Fgf10), Rn.PT 589220704.g (β-actin) (all from IDT); and Rn00565555_m1 (Smad1), Rn00569082_m1 (Runx1), Rn00590466_m1 (Runx3) (Applied Biosystems, Carlsbad, CA, USA). Each amplification reaction was performed in duplicate with 5.5 µL of QuantiNova Probe PCR Kit (Qiagen, Germantown, MD, USA), 0.5 µL of hydrolysis probe, and 4.5 µL of a 1:4 dilution of the cDNA, in a total volume of 10 µL.
The real-time PCR cycling conditions were as follows: one cycle of 95 ºC for 2 minutes, followed by 50 cycles of 95 ºC for 5 seconds and 60 ºC for 19 seconds. Relative quantification was determined using the Thermo Fisher Cloud Software, RQ version 3.7 (Life Technologies Corporation, Carlsbad, CA, USA).

Statistical analysis

GraphPad Prism 8.0 (GraphPad Software, San Diego, CA, USA) was used to obtain descriptive statistics and to compare the EWT and PRT responses and laboratory results among the SD, AD, and CG groups using the non-parametric, one-tailed Mann-Whitney U test. The level of significance was set at p<0.05.
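Although relative quantification was performed in vendor software, the computation it implements can be sketched as below, assuming the comparative Ct (2^-ΔΔCt) method with β-actin as the reference gene and the CG as calibrator; the data frames and column names are hypothetical.

```r
# Relative quantification by the comparative Ct method (an assumption; the
# vendor software's exact algorithm may differ in detail).
rq <- function(ct_target, ct_ref, calib_dct) {
  2^-((ct_target - ct_ref) - calib_dct)   # RQ relative to the control group
}

# Example: Il-1β in the LG, normalised to β-actin (hypothetical data frames)
calib_dct <- mean(cg$ct_il1b - cg$ct_actb)
rq_sd <- rq(sd$ct_il1b, sd$ct_actb, calib_dct)
rq_cg <- rq(cg$ct_il1b, cg$ct_actb, calib_dct)

# One-tailed Mann-Whitney U test, as described in the Statistical analysis
wilcox.test(rq_sd, rq_cg, alternative = "greater", exact = FALSE)
```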
Results

Upon corneal slit-lamp examination, all groups revealed mild keratitis and neovascularization ( and ). Regarding the EWT, the SD group exhibited a higher frequency of paw eye wipe movements after CAP sensitization than the CG (p=0.001), indicating hyperalgesia after BAK-induced corneal nerve damage. The AD 1M and 2M subgroups did not differ significantly from the CG in the EWT ( and ). Tear flow measured with the PRT showed lower median levels in the SD and AD 2M groups, although these were not significantly different from the CG ( and ). The qPCR analysis of proinflammatory cytokine mRNA revealed that SD decreased Tnf mRNA in the CO (p=0.02), whereas AD decreased Mmp9 expression after 1 month (p<0.001) and increased the expression of Il-1β, Il-6, and Tnf after 2 months (p=0.004, p=0.02, and p<0.001, respectively) ( and ). Furthermore, in the LG, SD increased the mRNA expression of the proinflammatory cytokines Il-1β and Il-6 (p=0.003 and p=0.004, respectively) and reduced the mRNA expression of Mmp9 (p<0.001), whereas AD increased the mRNA expression of Mmp9 after 1 and 2 months (p=0.01 and p=0.006, respectively) ( and ). Upon evaluation of the promitotic mediators in the LG, the SD group exhibited reduced mRNA expression of Runx1, Runx3, and Smad1 (p=0.03, p=0.002, and p=0.03, respectively). Moreover, the LG of the AD 1M subgroup exhibited higher mRNA expression of Runx3 (p=0.001), and that of the AD 2M subgroup showed lower mRNA expression of Bmp7 and Fgf10 (p=0.05 and p=0.008) ( and ).
The proinflammatory cytokine mRNA profile in the TG was as follows: the SD group exhibited higher mRNA expression of Il-1β and Tnf (p=0.01 and p=0.04) ( and ), whereas the TG of the AD 1M and 2M subgroups showed no changes in proinflammatory cytokines (data not shown).

Discussion

The present work revealed that SD of the CO and AD of the LG both induced inflammatory changes in the CO and LG, but only SD increased proinflammatory markers in the TG, consistent with the corneal hyperesthesia in response to CAP observed in the SD model. By contrast, AD of the LG induced the expression of the promitotic Runx3 in the LG. These observations indicate that neural damage promotes a proinflammatory shift that extends to other organs of the LFU. The failure of BAK to reduce tear flow, in contrast to previous studies that noted impaired tear flow, increased epitheliopathy, and corneal hypersensitivity, may be explained by the low sensitivity of and poor correlation among DES assessment methods, which are influenced by external factors such as anesthesia and environmental humidity . SD of the CO induced by BAK increased the expression of proinflammatory mediators not only in the CO, as evidenced previously in mice, but also in the LG . Moreover, this finding is concordant with previous studies that observed TG inflammation in mice after topical BAK use and corneal alkali burn . The hyperesthesia, demonstrated by the higher numbers of EWT in response to CAP and observed in the SD model but not in the AD model, accords with the mechanism of trigeminal pain arising from persistent inflammation . Furthermore, AD through LG nerve ablation preserved tear flow and normal CO sensitivity to CAP at 1 and 2 months, concordant with previous work that used saporin toxin to induce LG denervation and observed CO hypersensitivity to menthol but not to CAP . AD of the LG induced few changes in the LFU after 1 month, evidenced by a modest increase in the proinflammatory Mmp9 and an increase in the promitotic Runx3, suggesting an attempt to regenerate the LG tissue after the initial period of surgical manipulation. After 2 months, however, AD of the LG increased all the proinflammatory cytokines tested in the CO, even more than SD did, with Mmp9 continuing to rise in the LG. Unlike the SD model presented here or those mentioned above, AD did not alter the mRNA cytokine profile in the TG, consistent with the preservation of corneal sensitivity. Notably, the promitotic mediators in the LG increased only in the AD model, suggesting that regenerative mechanisms operate after LG denervation but return to baseline by the second month. These findings are concordant with previous work observing that parasympathetic disruption of the LG perpetuated the expression of proinflammatory cytokines and pro-apoptotic mediators for more than 2 months and impaired the synthesis of constitutive proteins . Notably, LG inflammation did not affect the neural network itself but impaired the autonomically mediated tear secretion process, as observed in mouse models of autoimmunity and in vitro studies . Nevertheless, the preservation of tear flow and of the CO observed in the AD model could be due to the support of the other lacrimal glands (i.e., infra- and intraorbital) in the rat . The rationale of the present work was to analyze the mechanisms of neural injury of the LFU.
In addition, we intended to distinguish the manifestations of SD and AD. In clinical practice, several diseases can disrupt the sensory or the autonomic motor network that supports the LFU, involving the CO and LG; these include diabetes mellitus, herpes zoster, herpes simplex keratitis, Hansen disease, surgery, trauma, and other conditions that can cause oculomotor, trigeminal, or facial nerve neuropathy. Notably, the difficulty of identifying the precise topographic and molecular mechanisms in the clinical setting reflects both the lack of non-invasive methods and case presentations that are frequently obscured by severe complications such as tissue ulceration and secondary infection. In conclusion, the AD and SD models share common features, such as inflammation of various parts of the LFU. However, the hyperesthesia and inflammatory markers in the TG of the SD model and the expression of regenerative mediators in the LG of the AD model are distinguishing features of these conditions that can be explored in future studies of DES secondary to neural damage of the LFU.
Reducing postoperative blood product usage and costs in cardiothoracic surgery: the implementation of a multispecialty perioperative care model incorporating a haemostasis checklist

Transfusion of blood products after cardiac surgery is associated with poor outcomes and has significant resource implications.
Through a combination of multispecialty interventions and the integration of a haemostasis checklist, institutions can significantly decrease blood transfusion in the postoperative period, resulting in significant cost savings and likely improved patient outcomes.
Establishing a multispecialty cardiac surgery taskforce was a novel initiative that has allowed rapid change in practice, sustainable implementation of new interventions, and the breakdown of traditional silos of care. This model may be replicable across other regions and healthcare systems.
The Royal Adelaide Hospital is one of Australia's largest teaching hospitals and provides tertiary services, including cardiothoracic surgery, to the population of South Australia. In 2017, the new Royal Adelaide Hospital opened, marking the completion of one of the most advanced infrastructure projects in the southern hemisphere. With 40 technical suites/operating theatres and 48 critical care beds, the intensive care unit (ICU) cares for over 1700 perioperative patients per year, approximately 700 of whom require specialist cardiothoracic surgery and postoperative care. However, in the year following the move, an increase in the number of blood products administered to postoperative cardiac surgical patients was anecdotally observed. Environmental and technical factors that could be related to the relocation were initially reviewed, but no significant issues were found. A multispecialty taskforce was established, and a healthcare improvement project was planned to define the problem more specifically before testing possible solutions using Plan-Do-Study-Act (PDSA) methodology. This aligned with the local health network's strategic aims of innovative and agile approaches to improving healthcare outcomes that are patient focused rather than specialty focused.
Cardiothoracic surgery represents a high-risk area for perioperative bleeding and the requirement for transfusion of blood products. The unique combination of invasive surgical procedures, the necessity of cardiopulmonary bypass (CPB) leading to haemodilution and exposure to extracorporeal circuits, and the requirement for anticoagulation results in a significant proportion of patients (up to 93%) receiving perioperative red cell transfusion, with use of fresh frozen plasma (FFP) and platelets at around 10% and 30%, respectively. Cardiothoracic surgery alone accounts for 5% of the total red cells transfused in South Australia, at over 2000 red cell transfusions per year. Not only are blood products an expensive and limited resource, requiring a complex process of collection, preparation, storage and administration, but administration of blood products in the cardiothoracic perioperative period is associated with an increase in 30-day mortality and with complications including acute kidney injury, prolonged ventilatory support and increased incidence of infection. Although this association between transfusion and harm does not imply causation, bleeding within the first 12 hours (irrespective of transfusion or return to theatre) is an independent predictor of mortality. Efforts focused on reducing blood product usage are therefore likely to reduce bleeding in the perioperative period and improve patient outcomes, in addition to producing organisational cost savings. Methods to minimise blood product use can be broadly categorised into the preoperative, intraoperative and postoperative phases. Preoperative methods include identification of patients at high risk, optimisation of anaemia treatment and management of antiplatelet and anticoagulant drugs. Intraoperative methods include the use of antifibrinolytic therapy, implementation of a methodical regime for ensuring haemostasis (discussed below), use of viscoelastic testing and limiting haemodilution. Postoperative methods include early recognition of bleeding with senior intervention, including a low threshold for return to theatre, rapid correction and maintenance of normothermia, and optimisation of coagulopathy in the event of bleeding. In 2020, the use of a multidisciplinary haemostasis checklist, the 'Papworth haemostasis checklist', which includes checking of surgical sites prior to sternal closure and triggered reviews of coagulation status, was shown to result in a reduction in blood loss, use of blood products and rates of return to theatre in cardiothoracic surgical patients. Central to the successful and sustainable implementation of the above measures, however, is a multidisciplinary and collaborative approach to haemostasis and blood conservation.
To quantify the initial problem preintervention, all blood product administration within the first 12 hours after cardiothoracic surgery was measured, with data taken from all patients who underwent CPB in the first quarter of 2021 (n=114 patients), stratified by blood product type and presented as units or adult doses (AD) per 100 patients. All postoperative cardiothoracic patients placed on CPB intraoperatively and then admitted to the ICU during the study period were included, including emergency admissions. The 12-hour window ensured coverage of any periods of ongoing transfusion prior to delayed return to theatre. These data were collected from the electronic transfusion record within the electronic patient record (units by quantity and type transfused). The study period was predefined to avoid data selection and sampling bias, and periods of normal cardiac workload (thereby excluding periods of COVID-related reduced activity) were preselected for future data collection, with timeframes chosen to capture >100 patients per period. The total cost of blood product administration was calculated using local blood bank data from 2023. We additionally collected data on blood products requested but not administered, reflecting products wasted in the perioperative period or requested in anticipation of bleeding but not used. Secondary data collected included ICU length of stay and return-to-theatre rate, obtained from the Australian and New Zealand Intensive Care Society and the Australian & New Zealand Society of Cardiac & Thoracic Surgeons databases, respectively. These balancing measures were monitored to ensure that any untoward effects of the intervention were recorded. Initial data demonstrated red cell usage of 47 units per 100 patients at an estimated cost of AU$20 206 per quarter, and total blood product use including cryoprecipitate, FFP and platelets of 141 units (or AD) per 100 patients at an estimated cost of AU$71 270 per quarter. The return-to-theatre rate was 7% during the pre-intervention period.
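The per-100-patient rates and quarterly costs follow from simple arithmetic; the sketch below illustrates the calculation with hypothetical unit counts and per-unit prices (not the actual local blood bank price list).

```r
# Hypothetical counts and prices chosen only to illustrate the calculation.
n_patients    <- 114
units         <- c(rbc = 54, ffp = 40, platelets = 35, cryo = 32)
cost_per_unit <- c(rbc = 375, ffp = 300, platelets = 600, cryo = 150)  # AUD, assumed

units_per_100 <- round(units / n_patients * 100)  # e.g. rbc ~ 47 per 100 patients
quarter_cost  <- sum(units * cost_per_unit)       # estimated AUD per quarter
```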
A collaborative cardiothoracic surgery (CTS) taskforce was established in 2020, focused on reducing bleeding in the perioperative period and optimising the use of blood products. This included medical leaders from cardiothoracic surgery, cardiology, intensive care and anaesthesia, in addition to senior members of the cardiothoracic nursing team, who would meet fortnightly. The planned intervention involved three individual components, as summarised in Box 1.

Box 1: Summary of phases of planned intervention

1. Preoperative phase (ward/outpatients)
Identification of high-risk patients (either via the preoperative assessment clinic if outpatients, or referral from the cardiothoracic team if admitted). Multispecialty team decision on patient disposition involving the cardiologist, cardiothoracic surgeon, anaesthetist and intensivist.

2. Intraoperative phase (theatre)
Incorporate the Papworth haemostasis checklist. Chest open: assess and communicate if coagulopathy is clinically suspected, and use viscoelastic testing. Before sternal closure: second packing to assess bleeding; consultation between consultant anaesthetist and surgeon to agree/modify the plan based on coagulopathy assessment. Before intensive care unit (ICU) transfer: consultant surgeon supervises sternal closure; team reviews drain output and ensures a plan for haemostasis/coagulopathy ('team time out'). Drains on suction once the sternum is closed. Ensure pump blood is given and the activated clotting time (ACT) reviewed prior to leaving theatre.

3. Postoperative phase (ICU)
Monitoring temperature: identify and act on hypothermia on arrival. Monitoring bleeding: defined, specific thresholds for excessive and clinically significant bleeding to empower rapid senior escalation. Identifying coagulopathy: repeat haemoglobin and coagulation studies if bleeding is ≥100 mL/30 min after the first hour; repeat rotational thromboelastometry 2-hourly; timeline for ACT measurement and correction. Re-exploration for bleeding: risks increase with delay; a low threshold for early re-exploration is advised.

A process map was created to pictorially represent our intervention. A key concern was whether there would be adequate buy-in from multiple professions, given that the perioperative period spans several often 'siloed' areas of care (outpatients, theatres, intensive care) that may not necessarily see the benefit of their intervention directly. We anticipated that the surgical, anaesthetic, intensivist and nursing teams might have developed preferred individual ways of working and could perceive the intervention as burdensome, prescriptive or unnecessary. To overcome this and plan for sustainability, we used a data-driven communication strategy, utilising local data to create a sense of urgency and demonstrate the need for change, alongside emphasising evidence-based clinical and workplace practices. In practice, the anaesthesia and nursing teams became key drivers of this intervention on the ground, with buy-in from the surgical and intensive care teams supported by presenting the data at the fortnightly CTS taskforce meetings. This CTS taskforce formed our 'winning coalition' and became key to longer-term sustainability, embedding this practice within our department, with ongoing work (such as the updated cardiothoracic clinical pathway) building on this healthcare initiative.
Our initial objective was to get 'buy-in' for the project. Although many of the individual interventions seem relatively straightforward, changing clinician and operating theatre practices is notoriously difficult. Getting all team members to understand the need for change and the potential benefits of the interventions was essential. We wanted to ensure that these interventions were delivered to every patient consistently. The preoperative plan therefore involved establishing the 'CTS taskforce'. This would consist of key stakeholders from the clinical areas as described above, but may vary by institution. A focused nursing education intervention, consisting of a protocolised bleeding assessment and management plan, nursing strategies to minimise postoperative bleeding, and thromboelastography familiarity training, was then planned to ensure the teams caring for patients in the postoperative period were aware of our strategies and would advocate for compliance with the intervention. The key here is twofold: optimise postoperative care to ensure not only that simple methods to reduce coagulopathy are consistently employed, but also to raise awareness of thresholds for intervention and escalation in the event of bleeding. Finally, we would implement the Papworth haemostasis checklist (modified for use within regional processes) and focus on surgical and anaesthetic interventions after ensuring that the previous improvement processes were established. Below we summarise the steps our institution took to make this change.

PDSA cycle 1 (2021): our initial intervention aimed to capture high-risk cardiothoracic surgical patients preoperatively. Our assumption was that, with early intervention by key parties, we could optimise preoperative risk factors. A protocol, as outlined in Box 1, was established via the CTS taskforce to ensure inpatient and high-risk elective patients were assessed prior to surgery, aimed at identifying and treating modifiable factors that may lead to bleeding. While creating closer ties between surgeons, anaesthetists and intensivists preoperatively was helpful for future PDSA cycles, it was difficult to measure the impact of this intervention, and colleagues felt this aspect had not yet been successful. This aspect was put on hold in order to focus on potentially higher-yield interventions, as described below.

PDSA cycle 2 (2021–2022): we then implemented a bundle of nursing education interventions. Sixty senior intensive care nurses specialising in care of the cardiothoracic patient underwent specific training on bleeding (including preventative strategies, early identification and escalation, collaborative management and unused blood return) in addition to thromboelastography training. We hypothesised that interventions in theatre were unlikely to have significant benefit if the postoperative care initiatives were not already established. Feedback from the training sessions was overwhelmingly positive; the team felt empowered by the focused education and assured that bleeding would be optimally managed, as the guideline was transparent and agreed on by all stakeholders. Following more formal initial education for all staff, we soon transitioned to ad-hoc/bedside education for new nursing staff. The intervention was regularly 'huddled' at changeover of shifts to ensure ongoing awareness.
PDSA cycle 3 (2022): once the postoperative team had completed the education intervention, a bundle of haemostasis interventions (see Box 1) based on the Papworth haemostasis checklist was introduced intraoperatively at 'chest open', 'prior to sternal closure' and 'prior to ICU', alongside additional theatre interventions such as 'consultant presence for sternal closure'. Compliance with all steps from all team members was not 100% (as monitored by anaesthesia staff, who were key to overseeing these interventions in theatre), but through fortnightly reinforcement, and with data showing reduced blood usage presented at the CTS taskforce meetings, compliance with most of these interventions from most staff was observed. We did find that blood products were on occasion wasted by teams thinking they would be required on ICU but then often not given, or given when not indicated. Practice changed to encourage cardiothoracic surgical and anaesthesia teams to return all unused blood products to the blood bank prior to ICU transfer. Interdisciplinary education, including awareness of the timely review of coagulopathy, and attentive practice on ICU helped to overcome this barrier. We were concerned that, despite the CTS taskforce, practice would revert to the preintervention baseline, and so chose to conduct a second postintervention data collection a year following the intervention to monitor compliance. Audit of the haemostasis bundle as a process measure was not initially implemented but was considered for future development of the project, and we would recommend this for other centres implementing this change.
Raw data regarding blood product administration in the periods preintervention (A), postintervention (B) and 1-year following intervention (C) were collected and displayed per 100 patients and by blood product, in addition to a cost calculation per blood product per intervention period in AU$. Measurement was carried out in the same manner for all periods, as outlined in the Measurement section. These data show a reduction in blood products administered, which is sustained over both the postintervention and 1-year following intervention periods. Red cell, cryoprecipitate and FFP usage were particularly reduced, with 57%, 47% and 72% reductions, respectively, following intervention, and similar results were maintained on repeat audit 1 year postintervention, with on average 84 fewer blood products used per 100 patients. The difference in cost per 100 patients in the 1-year postintervention period was AU$36 928 (a 59% reduction). Run charts demonstrate a significant reduction in all blood product usage that persists 1 year after the initial intervention, displayed by blood product type in the supplementary figures. This significant difference applies when products are analysed individually; however, it is notable that use of platelets was not reduced as significantly as other products using this intervention. Return to theatre rates were 3% and 4% in the two postintervention periods (from 7% preintervention). Blood products wasted totalled three AD of cryoprecipitate in the preintervention phase, one AD of cryoprecipitate in the postintervention phase, and none in the washout phase. Average ICU length of stay was noted to be higher in the postintervention period (77 vs 68 hours); however, this was confounded by access block delaying the discharge of 'ward ready' patients from ICU during this time.
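The summary arithmetic reported above can be reproduced with a few lines of code. The sketch below (illustrative only) checks a percentage reduction against the stated red cell figures and applies a simplified run-chart shift rule (six consecutive points below the baseline median, a common QI heuristic); the monthly series at the end is invented for demonstration and is not the study's data.

```python
# Illustrative sketch of the summary arithmetic above. Inputs are taken from
# the text where stated (47 red cell units per 100 patients preintervention,
# a 57% reduction); the monthly series below is hypothetical.

def percent_reduction(pre: float, post: float) -> float:
    """Relative reduction between two periods, as a percentage of baseline."""
    return (pre - post) / pre * 100


def shift_below_median(points: list[float], median: float, run_length: int = 6) -> bool:
    """Crude run-chart rule: flag a shift if `run_length` consecutive points
    fall below the baseline median (a common QI heuristic, simplified here)."""
    run = 0
    for p in points:
        run = run + 1 if p < median else 0
        if run >= run_length:
            return True
    return False


pre_red_cells = 47.0                          # units per 100 patients, preintervention
post_red_cells = pre_red_cells * (1 - 0.57)   # implied by the reported 57% reduction
print(f"Postintervention red cells: ~{post_red_cells:.0f} units per 100 patients")
print(f"Check: {percent_reduction(pre_red_cells, post_red_cells):.0f}% reduction")

# Hypothetical monthly totals illustrating a sustained shift below a baseline median
monthly_usage = [140, 138, 145, 90, 85, 80, 78, 82, 79]
print("Sustained shift detected:", shift_below_median(monthly_usage, median=141))
```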
Several changes were made to this project during its implementation based on learning, which we would like to share with other units aiming to establish a similar initiative. First, staff turnover represents a significant challenge, threatening compliance with the interventions due to differences in surgical or anaesthetic technique and in intensive care medical or nursing preferences. This can cause variation in results due to differences in transfusion thresholds and clinical preferences and can threaten the sustainability of the project. Staff turnover was exacerbated by the COVID-19 pandemic and is predicted to be an ongoing issue in healthcare. We have learnt that it is essential to keep emphasising the reasons behind the intervention and presenting data showing improvement (audit and feedback) to combat this. It is also helpful to identify practice that deviates from the initiative and explore the reasons for this. We engaged with our 'blood link' nursing staff, who championed these interventions as part of their blood safety and minimisation role and provided continuity during periods of staff changeover.

Second, it is relatively easy to monitor blood product usage, but more difficult to monitor bleeding from a systems perspective. We suggest considering how to record and obtain data regarding drain outputs when ICU and theatre systems are being reviewed or designed, so that this essential data can be tracked more accurately. We also found it difficult to monitor compliance with some aspects of the intervention, such as surgical practice, and learnt that it is essential to have 'champions' from various specialties (eg, anaesthesia) who can feed back when practice begins to regress, as it inevitably will without ongoing reinforcement and culture change.

In addition, use of platelets was not reduced to the same extent as other blood products. We theorise that this difference might be explained by the fact that our protocol's thromboelastography does not inform on platelet function, and that platelets are often used as an initial therapy for bleeding after cardiac bypass. In future we would also collect further data points to increase the accuracy of our run chart data.

Finally, we did not have sufficient data to review the effectiveness of our first intervention (highlighting preoperative patients at high risk), and this is something we are currently working towards as a separate project. However, we were buoyed by the practice change that this initiative was able to generate, including changing operative and ICU management. Buy-in from senior surgical leaders was essential here to model behaviour, change theatre practice and emphasise the importance of the haemostasis time out. Senior presence and involvement at sternal closure was a significant cultural shift, but one that we have been able to maintain through constant emphasis on the positive results of the project.
The implementation of a multispecialty, perioperative care model incorporating the Papworth haemostasis checklist represents a significant achievement in reducing blood loss for our patients undergoing cardiothoracic surgery. This healthcare initiative, driven by the urgent need to address the increased use of blood products perioperatively, emphasises the necessity of a collaborative approach to healthcare improvement in complex healthcare systems. The establishment of a multidisciplinary taskforce and programme of education, and the adoption of the Papworth haemostasis checklist, were at the core of this strategy, emphasising preoperative identification of high-risk patients, meticulous, checklist-guided intraoperative haemostasis management, and focused postoperative care with specific triggers for acting on bleeding and coagulopathy. These measures, spanning the perioperative period, have shown a reduction in blood product usage, thereby not only demonstrating improved patient outcomes but also yielding cost savings for the healthcare system. By creating an inclusive culture that values the input and collaboration of various specialties via our 'CTS taskforce', the initiative has overcome traditional silos of care, enhancing the quality of perioperative care for patients undergoing cardiothoracic surgery. The lessons learnt from implementing this initiative, including the challenges of staff turnover, the importance of continuous reinforcement of practice changes, careful consideration of how clinical data are collected for ongoing review, and methods to effect change in theatres, provide invaluable insights for other institutions aiming to implement similar interventions. The maintenance of this initiative, supported by ongoing education, the identification of specialty 'champions' and the constant review of practice against established benchmarks, ensures its sustainability.
Healthcare providers' expected barriers and facilitators to the implementation of person‐centered long‐term follow‐up care for childhood cancer survivors: A

INTRODUCTION

The number of childhood cancer survivors (CCSs) is increasing. Currently, the estimated population of CCSs in Europe is approximately 500,000. CCSs face a high risk of developing adverse late health effects due to their cancer history and treatment. These late effects are heterogeneous, occurring on the physical, psychological, and social level, and lead to higher morbidity and mortality rates compared to age- and sex‐matched controls. The quality of life of CCSs is often affected by late effects, emphasizing the necessity of long‐term follow‐up (LTFU) care to improve CCSs' health and quality of life. Due to the heterogeneity in incidence, type, and severity of late effects, a person‐centered multidisciplinary care model is necessary to guide the organization of LTFU care for CCSs. High‐quality LTFU care is based on evidence‐based guidelines for screening and surveillance of late health effects after cancer treatment and on person‐centered care. Despite the available literature on evidence‐based (models of) LTFU care, sustainable implementation remains a challenge. The majority of European CCSs still have limited access to high‐quality LTFU care. To enhance implementation of LTFU care for CCSs in Europe, the PanCareFollowUp (PCFU) consortium, established in 2018, developed the PCFU Care intervention based on a Dutch LTFU care model. The overall aim of the intervention is to empower childhood cancer survivors across Europe and to improve their health and quality of life by providing person‐centered survivorship care. The PCFU Care intervention will be evaluated through a prospective cohort study conducted at four pediatric cancer‐focused LTFU care clinics, each representing different healthcare systems with varying levels of pre‐existing survivorship care implementation. However, most innovations do not implement themselves. Tailored implementation strategies have the potential to improve implementation efforts. Identifying barriers and facilitators is a critical first step in developing an effective implementation strategy. Existing reviews on barriers and facilitators for implementing LTFU care for cancer survivors have mainly focused on adult cancer survivors and specific cancer types. Barriers and facilitators are insufficiently studied in the context of establishing LTFU care for the heterogeneous population of CCSs. Therefore, we performed a pre‐implementation study aiming to explore expected barriers and facilitators for the implementation of the PCFU Care intervention among healthcare providers (HCPs) involved in LTFU care for CCSs in four European clinics.
METHODS

2.1 Study design and setting

A qualitative study was performed using semi‐structured focus groups with HCPs to explore potential barriers and facilitators for the implementation of the PCFU Care intervention. This study followed the consolidated criteria for reporting qualitative research (COREQ checklist) and adhered to the local medical ethical standards of the participating centers. As part of the European‐wide PCFU project (Horizon 2020 grant), this study included four European LTFU care clinics for CCSs, located in Belgium, the Czech Republic, Sweden and Italy.

2.2 PCFU Care intervention

The PCFU Care intervention is based on international guidelines for surveillance of late effects and on person‐centered care. The organizational structure for person‐centered care is based on the pillars of Eckman et al. and consists of three phases: initiating, integrating and safeguarding a partnership between patients and HCPs. The first phase of the PCFU Care intervention, involving the initiation of a partnership between CCSs and HCPs, takes place before the clinic visit. During this phase, CCSs and HCPs prepare the clinic visit by completing a questionnaire (the survivor questionnaire) and a treatment summary, respectively. The survivor questionnaire is web‐based and gathers information about the CCSs' health, well‐being, medication use, medical and family history, lifestyle, social situation, healthcare needs, and preferences for care with their HCP. The second phase, concerning the integration of this established partnership between CCSs and HCPs, involves discussing the CCSs' health and follow‐up care, based on shared decision‐making, during the clinic visit. Lastly, this partnership is safeguarded by a follow‐up call during which the results of diagnostic tests and recommendations for further follow‐up care are discussed. These results and recommendations are summarized in a survivorship care plan. The figure shows the important steps within these three phases. The development and features of the PCFU Care intervention are described elsewhere.

2.3 Study population

A purposive sampling strategy was applied to recruit participants for this study. At each of the four centers where the PCFU Care intervention will be tested for feasibility, we aimed to organize one focus group with a minimum of five HCPs per group. HCPs from the participating clinics involved in LTFU care were invited by local representatives of the PCFU project. HCPs who were willing to participate registered via e‐mail.

2.4 Data collection

On‐site focus groups were conducted in Belgium, the Czech Republic, and Italy between September 2019 and November 2019. For pragmatic reasons, a video conference application was used for Sweden. Focus groups in Sweden and the Czech Republic were conducted in English, and focus groups in Belgium and Italy were conducted in the native language. Two independent experts in qualitative research and implementation research conducted the focus groups, with a note‐taker present. Prior to the focus groups, participants were informed about the study and had the opportunity to ask questions. Subsequently, participants signed informed consent and completed a demographic questionnaire containing background information such as sex, age, and profession. The participants did not receive any incentives for participation.
To identify potential barriers and facilitators for the implementation of the PCFU Care intervention, a semi‐structured interview guide was developed based on the theoretical framework of Grol and Wensing. This framework (see the table) describes barriers to and incentives for change that can influence the implementation of interventions in the medical field at six levels of healthcare (innovation, individual professional, patient, social context, organizational context, economic and political context). The interview guide incorporated open‐ended questions on (1) current follow‐up care; (2) differences between current care and care according to the PCFU Care intervention; (3) HCPs' opinion of the PCFU Care intervention; (4) expected barriers and facilitators for implementation of the PCFU Care intervention, in general and according to the six levels of Grol and Wensing; and (5) needs for successful implementation of the PCFU Care intervention.

2.5 Data analysis

Data from the focus group interviews were audio‐recorded, anonymized, and transcribed verbatim, and the Italian focus group was translated into English. Three researchers coded each transcript independently. Discrepancies were discussed until a consensus was reached. A thematic analysis was performed using Atlas.ti 22.0.11 for Windows. The analysis consisted of an inductive approach followed by a deductive approach. The inductive approach started with open coding of transcripts at the sentence level. Subsequently, axial coding was used to cluster open codes into categories. The emerged categories were deductively mapped onto the levels and domains within the theoretical framework of Grol and Wensing. New categories were added to the framework. Representative quotes were selected from the transcripts.
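To make the deductive mapping step concrete, the sketch below (illustrative only; the category names and counts are invented and are not taken from the study's codebook) tallies inductively derived categories against the six levels of the Grol and Wensing framework.

```python
# Minimal illustrative sketch of the deductive step described above:
# tallying inductively derived categories against the six levels of the
# Grol and Wensing framework. Category names are invented examples.
from collections import Counter

LEVELS = (
    "innovation", "individual professional", "patient",
    "social context", "organizational context", "economic and political context",
)

# Hypothetical mapping of emerged categories to framework levels
category_to_level = {
    "questionnaire feasibility": "innovation",
    "GP knowledge of late effects": "individual professional",
    "survivor motivation": "patient",
    "collaboration with psychosocial care": "social context",
    "staff capacity": "organizational context",
    "long-term funding": "economic and political context",
}


def tally_by_level(coded_categories: list[str]) -> Counter:
    """Count how often categories at each framework level were coded."""
    return Counter(category_to_level[c] for c in coded_categories)


codes = ["staff capacity", "staff capacity", "survivor motivation", "long-term funding"]
print(tally_by_level(codes))
```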
RESULTS

3.1 Demographics

Thirty HCPs participated in four focus groups at the LTFU care clinics for CCSs. The table provides the participants' demographics. HCPs had a mean age of 51 years, 67% were female, and they had 24 years of working experience on average. Group sizes within the four focus groups ranged from five to 12 participants.

3.2 Barriers and facilitators for implementation of the PCFU Care intervention

The tables present the identified barriers and facilitators according to the six levels of Grol and Wensing. Most barriers and facilitators were identified at the organizational level. The results section elaborates on barriers and facilitators that were mentioned during at least two of the four focus group interviews; these are shown in bold within the tables.

3.3 Innovation level: PCFU Care intervention

3.3.1 Barriers

HCPs questioned the feasibility of the PCFU Care intervention for survivors with long travel distances to the LTFU clinic. Additionally, HCPs expressed uncertainty regarding the attractiveness of the survivor questionnaire, which may be impersonal, insufficient and demanding for some survivors.

"That it is a minimum, the questionnaire is still a blank sheet of paper with many questions that cannot be discussed in some way, so for me this is a barrier, the questionnaire. I cannot specifically foresee this questionnaire, I fear that, compared to my experience with patients, the questionnaire is a bit impersonal, and therefore a barrier."

However, the questionnaire was also viewed as an attractive tool when well supported by a clinic visit and a good survivor‐HCP relationship.

3.3.2 Facilitators

HCPs mentioned several practical advantages of the PCFU Care intervention. The intervention provides knowledge on LTFU care including evidence‐based guidelines, offers a consistent structure for addressing late effects in CCSs, and includes CCSs who are lost to follow‐up. Additionally, the survivor questionnaire offers practical advantages, such as aiding CCSs in preparing clinic visits and encouraging the discussion of important topics during those visits.

"This [the survivor questionnaire] is to kick off the discussion […] Because if we can focus on what they have marked as 'I'm very concerned' I think we meet them in the correct arena. We can start with other things, but I think by doing this we have a greater chance of hitting what they really believe is important. And we really give them a chance to think before they come to the visit, to think over their situations."

3.4 Professional level

3.4.1 Barriers

Participants mentioned that HCPs outside the LTFU care team may lack knowledge and skills regarding LTFU care. In particular, HCPs expected that general practitioners (GPs) lack knowledge, which might lead to ineffective referrals and follow‐up care for CCSs. Additionally, a lack of training for HCPs outside the LTFU team would potentially lead to an underestimation of CCSs' risks.

"The training for specialists on the territory that in my opinion is absolutely lacking with particular regard to the adult world, because many are still linked to the concept of being cured and not long surviving, therefore with an underestimation of the risks of patients that in my opinion is still in place."

3.4.2 Facilitators

Adequate knowledge and skills regarding LTFU care among important stakeholders, including GPs, local care facilities, and specialists from different disciplines, were expected to facilitate LTFU care.
In addition, participants mentioned the importance of educated hospital staff for the PCFU intervention. A team of trained staff would increase the exchange of knowledge and improve cooperation with HCPs within and outside the hospital. Another expected facilitator was the HCPs' positive attitude towards the PCFU Care intervention. The survivor questionnaire was seen as a potentially time-saving tool. HCPs had high expectations and considered the intervention a valuable innovation. They were intrinsically motivated and viewed caring for CCSs as a moral obligation. Additionally, HCPs were motivated to convince stakeholders to prioritize LTFU care, for example by convincing the hospital management to allocate resources and encouraging GPs to refer survivors to the LTFU clinic.

"It's a moral obligation if you treat people when they're children and adolescents that you take care of them when they're adults."

3.5 Patient level

3.5.1 Barriers

HCPs mentioned that survivors who are dealing with complexities, such as insufficient reading skills or cognitive impairment, have difficulty completing, or are unable to complete, the survivor questionnaire.

"With brain tumor survivors, you have to pay attention. There are a couple that cannot fill in the [survivor questionnaire]. Half of them cannot fill in the questionnaire."

Besides, HCPs encountered situations where survivors did not show up, canceled their clinic visit, or were unwilling to attend. Furthermore, HCPs mentioned a lack of trust among survivors towards GPs and local care facilities, which can hinder the efficient organization of LTFU care.

"It becomes very difficult, also because, let's face it, the trust of these patients in local centers is close to zero, especially with regard to their pathology and their previous tumor disease, etc., they don't trust anymore. So we have a lot of work to do from a cultural point of view, not only with doctors, but also with patients, and that would be a lot…"

3.5.2 Facilitators

Participants considered awareness of the importance of LTFU care among CCSs a facilitator for the adoption of the PCFU Care intervention. Furthermore, HCPs observed that survivors are generally motivated to respond to questionnaires.

"The vast majority have filled this [the survivor questionnaire] in and bring it with them."

Providing additional support for survivors who have been lost to follow‐up in LTFU care, or for those who may find the questionnaire demanding, was seen as a facilitator for their compliance with the PCFU Care intervention.

3.6 Social context level

3.6.1 Barriers

HCPs expressed concerns about the anticipated lack of collaboration with psychosocial care facilities.

"The connection with the territorial facilities on these mental and psychosocial health aspects is absolutely null, because these are particular patients who sometimes do not have pure psychiatric disorders, but have reactive syndromes, rather than from employment, social, economic, sentimental point of view, they are behind, and no one takes charge of these needs."

This means that survivors' psychosocial needs may not be adequately addressed, despite the high demand for psychosocial care among the survivor population. Another potential barrier mentioned by HCPs was uncertainty about whether survivors who face difficulties with completing the survivor questionnaire or attending the LTFU clinic have the opportunity to receive environmental/family support to assist them.
3.6.2 Facilitators

HCPs expected that collaborations with various stakeholders in the context of LTFU care would facilitate the organization of LTFU care. This collaborative effort would involve cooperation with GPs, local care facilities, HCPs from different disciplines, psychosocial care facilities and the (inter)national network regarding LTFU care. These collaborations would facilitate the establishment of a care pathway for survivors, effective referrals and communication across healthcare disciplines, appropriate sharing of medical information, addressing psychosocial needs, and raising awareness about LTFU care.

"There is plenty of collaboration from all the specialists, we are organised for the most serious complications, and we have good cooperation in such different fields even with adult experts."

"Having such prominent collaborators like cardiologists here is another facilitator. Because we need to work on different levels to have internal medicine people who will focus on […], for example cardiovascular risk. That's something we have the common goals, like oncologists and cardiologists because we foresee some troubles in our survivors being 50 years old."

3.7 Organizational level

3.7.1 Barriers

HCPs mentioned a lack of time and staff for various components of the PCFU Care intervention. Specifically, time limitations were expected for the survivor questionnaire procedures, such as processing the questionnaire results before the clinic visits. Moreover, preparing the treatment summary was seen as a time‐consuming process. HCPs also highlighted the challenge of both treating acute cancer patients and caring for survivors within the available capacity. Due to limited capacity, acute care is often prioritized over LTFU care. In the future, an increasing number of survivors will be seen at the LTFU clinic. The lack of staff, resources, and care facilities were major barriers for sustainable implementation. HCPs mentioned that the lack of available medical data and of efficient Information and Communication Technology (ICT) support would hinder the effective exchange of medical information for the treatment summary.

"We have different types of patient records, so, in the most of our area we can read the results, it is just one same system, but then there are also three more that are totally different and don't communicate. So, you rely on papers and papers being scanned and so. That is how the systems are, there are different healthcare providers who don't really speak electronically to each other. So if I would point out a possible barrier then it is still the communication with the three other regions which work with different communications systems and send you the patient data if they send it to you at all via paper or via post mail."

Besides, HCPs would face difficulties with organizing multiple examinations on the same day. Another barrier was the lack of available psychosocial support and the absence of a structured psychosocial care pathway to adequately address survivors' psychosocial needs.

"We prevent the second tumor, we make them responsible for their project of care and life, but then instead all the psychosocial aspects that we identify, a treatment has not been thought through. On the field there are no structures to welcome them, there are no paths, there is no specific training, because either they are placed in the melting pot of adult psychiatric patients, and there is no response to their needs, or the problem is underestimated, and then they are left to themselves."
3.7.2 Facilitators

HCPs identified specific professional roles that would facilitate the implementation of the PCFU Care intervention, as outlined in the table under the organizational level, subdomain 'capacity'. The nurse plays an important role in guiding survivors through the PCFU Care intervention; when CCSs receive guidance, HCPs expected that questionnaire responses would improve. Data managers and Information Technology (IT) experts would facilitate the extraction of data for treatment summaries and the development of high‐quality IT systems that are shareable between different healthcare settings to exchange medical information. Psychosocial roles, including psychologists, social workers, and occupational physicians, were seen as important to guide CCSs from a psychosocial perspective. Having staff continuously available would facilitate sustainable implementation.

"We're recently well‐staffed for doctors, I would say. There are four at least at the department of oncology and two in the department of pediatric oncology. We have excellent secretaries that support us in such a good way. We have since two, three years now [an occupational therapist]. I think it's optimal."

"If you can have somebody [a nurse] but with the skills. It's not any nurse. It's a nurse with the skills. I mean, you can train somebody, but [our nurse] comes with the knowledge of […] having worked with these patients already at the department of endocrinology on all the late effects associated there. So, we were blessed to have somebody who comes fully equipped from the beginning."

Furthermore, the alignment of organizational structures and operating procedures with the elements of the PCFU Care intervention would facilitate its implementation. An organized structure would support efficient management of the medical and psychosocial needs of CCSs.

3.8 Economic and political level

3.8.1 Barriers

HCPs reported that convincing hospital managers to prioritize LTFU care and allocate adequate resources is a time‐consuming process. Insufficient funding for certain components of the PCFU Care intervention and limitations in financing psychosocial care were identified. Furthermore, a lack of sufficient financial support for survivors might prevent them from attending the LTFU clinic: some survivors would be willing to return, but their financial situation hinders them from doing so. Survivors with long travel distances face travel and accommodation costs and lost days of work that are usually not reimbursed.

"We have patients who also come from outside the region, therefore they have to face expenses both in terms of travel and stay in the hospital facilities, therefore high costs that are not always reimbursable, and so also then especially with regard to the group of adults and young adults, even lost days of work, so this type of organization it is a barrier."

HCPs mentioned the uncertainty of long‐term financial resources for LTFU care as a barrier to achieving sustainable implementation.

3.8.2 Facilitators

HCPs mentioned several facilitators for achieving (long‐term) financial resources for LTFU care. International cooperation within LTFU care for CCSs would strengthen the argumentation for convincing stakeholders at both institutional and national levels to allocate resources for LTFU care. Reporting results and examples of LTFU care would raise awareness and advocate for sustainable resource allocation.
"Hammering on the authorities to make them understand that this is an issue. And there it's always good to have the numbers. It's always good to have the numbers to be able to say, There are this many people, they are these ages, they have such and such issues, we see them at this and that regularity. And I think the only thing that will affect someone sitting on the money and on the resource is being convinced by numbers, by data. Yeah, and try to calculate the health economics about it."

Additionally, financial aid for survivors was expected to facilitate their participation in LTFU care.
Demographics Thirty HCPs participated in four focus groups at the LTFU care clinics for CCSs. Table provides the participants' demographics. HCPs had a mean age of 51 years, 67% were female and they had 24 years of working experience on average. Group sizes within the four focus groups ranged from five to 12 participants.
Barriers and facilitators for implementation of the PCFU Care intervention Tables and present the identified barriers and facilitators according to the six levels of Grol and Wensing. Most barriers and facilitators were identified on the organizational level. The results section elaborates on barriers and facilitators that were mentioned during at least two of the four focus group interviews. These are in bold within Tables and .
Innovation level: PCFU Care intervention 3.3.1 Barriers HCPs questioned the feasibility of the PCFU Care intervention for survivors with long travel distances to the LTFU clinic. Additionally, HCPs expressed uncertainty regarding the attractiveness of the survivor questionnaire, which may be impersonal, insufficient and demanding for some survivors. That it is a minimum, the questionnaire is still a blank sheet of paper with many questions that cannot be discussed in some way, so for me this is a barrier, the questionnaire. I cannot specifically foresee this questionnaire, I fear that, compared to my experience with patients, the questionnaire is a bit impersonal, and therefore a barrier. However, the questionnaire was also viewed as an attractive tool when well‐supported by a clinic visit and a good survivor‐HCP relation. 3.3.2 Facilitators HCPs mentioned several advantages in practice regarding the PCFU Care intervention. The intervention provides knowledge on LTFU care including evidence‐based guidelines, offers a consistent structure for addressing late effects in CCSs, and includes CCSs that are lost to follow up. Additionally, the survivor questionnaire offers practical advantages, such as aiding CCSs in preparing clinic visits and encouraging the discussion of important topics during those visits. This [the survivor questionnaire] is to kick off the discussion […] Because if we can focus on what they have marked as “I'm very concerned” I think we meet them in the correct arena. We can start with other things, but I think by doing this we have a greater chance of hitting what they really believe is important. And we really give them a chance to think before they come to the visit, to think over their situations.
Barriers HCPs questioned the feasibility of the PCFU Care intervention for survivors with long travel distances to the LTFU clinic. Additionally, HCPs expressed uncertainty regarding the attractiveness of the survivor questionnaire, which may be impersonal, insufficient and demanding for some survivors. That it is a minimum, the questionnaire is still a blank sheet of paper with many questions that cannot be discussed in some way, so for me this is a barrier, the questionnaire. I cannot specifically foresee this questionnaire, I fear that, compared to my experience with patients, the questionnaire is a bit impersonal, and therefore a barrier. However, the questionnaire was also viewed as an attractive tool when well‐supported by a clinic visit and a good survivor‐HCP relation.
Facilitators HCPs mentioned several advantages in practice regarding the PCFU Care intervention. The intervention provides knowledge on LTFU care including evidence‐based guidelines, offers a consistent structure for addressing late effects in CCSs, and includes CCSs that are lost to follow up. Additionally, the survivor questionnaire offers practical advantages, such as aiding CCSs in preparing clinic visits and encouraging the discussion of important topics during those visits. This [the survivor questionnaire] is to kick off the discussion […] Because if we can focus on what they have marked as “I'm very concerned” I think we meet them in the correct arena. We can start with other things, but I think by doing this we have a greater chance of hitting what they really believe is important. And we really give them a chance to think before they come to the visit, to think over their situations.
Professional level 3.4.1 Barriers Participants mentioned that HCPs outside the LTFU care team may lack knowledge and skills regarding LTFU care. Particularly, HCPs expected that general practitioners (GPs) lack knowledge, which might lead to ineffective referrals and follow‐up care for CCSs. Additionally, lack of training for HCPs outside the LTFU team would potentially lead to an underestimation of CCSs risks. The training for specialists on the territory that in my opinion is absolutely lacking with particular regard to the adult world, because many are still linked to the concept of being cured and not long surviving, therefore with an underestimation of the risks of patients that in my opinion is still in place. 3.4.2 Facilitators Adequate knowledge and skills regarding LTFU care among important stakeholders, including GPs, local care facilities, and specialists from different disciplines were expected to facilitate LTFU care. In addition, participants mentioned the importance of educated hospital staff for the PCFU intervention. A team of trained staff would increase the exchange of knowledge and improves cooperation with HCPs within and outside the hospital. Another expected facilitator was the HCPs' positive attitude towards the PCFU Care intervention. The survivor questionnaire was seen as a potentially time saving tool. HCPs had high expectations and considered the intervention as a valuable innovation. They were intrinsically motivated and viewed caring for CCSs as a moral obligation. Additionally, HCPs were motivated to convince stakeholders to prioritize LTFU care, such as convincing the hospital management to allocate resources and encouraging GPs to refer survivors to the LTFU clinic. It's a moral obligation if you treat people when they're children and adolescents that you take care of them when they're adults.
Barriers Participants mentioned that HCPs outside the LTFU care team may lack knowledge and skills regarding LTFU care. Particularly, HCPs expected that general practitioners (GPs) lack knowledge, which might lead to ineffective referrals and follow‐up care for CCSs. Additionally, lack of training for HCPs outside the LTFU team would potentially lead to an underestimation of CCSs risks. The training for specialists on the territory that in my opinion is absolutely lacking with particular regard to the adult world, because many are still linked to the concept of being cured and not long surviving, therefore with an underestimation of the risks of patients that in my opinion is still in place.
Facilitators Adequate knowledge and skills regarding LTFU care among important stakeholders, including GPs, local care facilities, and specialists from different disciplines were expected to facilitate LTFU care. In addition, participants mentioned the importance of educated hospital staff for the PCFU intervention. A team of trained staff would increase the exchange of knowledge and improves cooperation with HCPs within and outside the hospital. Another expected facilitator was the HCPs' positive attitude towards the PCFU Care intervention. The survivor questionnaire was seen as a potentially time saving tool. HCPs had high expectations and considered the intervention as a valuable innovation. They were intrinsically motivated and viewed caring for CCSs as a moral obligation. Additionally, HCPs were motivated to convince stakeholders to prioritize LTFU care, such as convincing the hospital management to allocate resources and encouraging GPs to refer survivors to the LTFU clinic. It's a moral obligation if you treat people when they're children and adolescents that you take care of them when they're adults.
Patient level 3.5.1 Barriers HCPs mentioned that survivors who are dealing with complexities, such as insufficient reading skills or cognitive impairment, have difficulties or are incapable to complete the survivor questionnaire. With brain tumor survivors, you have to pay attention. There are a couple that cannot fill in the [survivor questionnaire]. Half of them cannot fill in the questionnaire. Besides, HCPs encountered situations where survivors did not show up or canceled their clinic visit or were unwilling to attend. Furthermore, HCPs mentioned a lack of trust among survivors towards GPs and local care facilities, which can hinder efficient organization of LTFU care. It becomes very difficult, also because, let's face it, the trust of these patients in local centers is close to zero, especially with regard to their pathology and their previous tumor disease, etc., they don't trust anymore. So we have a lot of work to do from a cultural point of view, not only with doctors, but also with patients, and that would be a lot… 3.5.2 Facilitators Participants considered awareness of the importance of LTFU care among CCS as a facilitator for the adoption of the PCFU Care intervention. Furthermore, HCPs observed that survivors are generally motivated to respond to questionnaires. The vast majority have filled this [the survivor questionnaire] in and bring it with them. Providing additional support for survivors who have been lost to follow‐up in LTFU care or for those who may find the questionnaire demanding, was seen as a facilitator for their compliance with the PCFU Care intervention.
Barriers HCPs mentioned that survivors who are dealing with complexities, such as insufficient reading skills or cognitive impairment, have difficulties or are incapable to complete the survivor questionnaire. With brain tumor survivors, you have to pay attention. There are a couple that cannot fill in the [survivor questionnaire]. Half of them cannot fill in the questionnaire. Besides, HCPs encountered situations where survivors did not show up or canceled their clinic visit or were unwilling to attend. Furthermore, HCPs mentioned a lack of trust among survivors towards GPs and local care facilities, which can hinder efficient organization of LTFU care. It becomes very difficult, also because, let's face it, the trust of these patients in local centers is close to zero, especially with regard to their pathology and their previous tumor disease, etc., they don't trust anymore. So we have a lot of work to do from a cultural point of view, not only with doctors, but also with patients, and that would be a lot…
Facilitators Participants considered awareness of the importance of LTFU care among CCS as a facilitator for the adoption of the PCFU Care intervention. Furthermore, HCPs observed that survivors are generally motivated to respond to questionnaires. The vast majority have filled this [the survivor questionnaire] in and bring it with them. Providing additional support for survivors who have been lost to follow‐up in LTFU care or for those who may find the questionnaire demanding, was seen as a facilitator for their compliance with the PCFU Care intervention.
Social context level 3.6.1 Barriers HCPs expressed concerns about the anticipated lack of collaboration with psychosocial care facilities. The connection with the territorial facilities on these mental and psychosocial health aspects is absolutely null, because these are particular patients who sometimes do not have pure psychiatric disorders, but have reactive syndromes, rather than from employment, social, economic, sentimental point of view, they are behind, and no one takes charge of these needs. This means that survivors' psychosocial needs may not be adequately addressed, despite the high demand for psychosocial care among the survivor population. Another potential barrier mentioned by HCPs was the uncertainty whether survivors who face difficulties with completing the survivor questionnaire or attending the LTFU clinic, have the opportunity to receive environmental/family support to assist them. 3.6.2 Facilitators HCPs expected that collaborations with various stakeholders in the context of LTFU care would facilitate the organization of LTFU care. This collaborative effort would involve cooperation with GPs, local care facilities, HCPs from different disciplines, psychosocial care facilities and the (inter)national network regarding LTFU care. These collaborations would facilitate the establishment of a care pathway for survivors, effective referrals and communication across healthcare disciplines, appropriately sharing of medical information, addressing psychosocial needs, and raising awareness about LTFU care. There is plenty of collaboration from all the specialists, we are organised for the most serious complications, and we have good cooperation in such different fields even with adult experts. Having such prominent collaborators like cardiologists here is another facilitator. Because we need to work on different levels to have internal medicine people who will focus on […], for example cardiovascular risk. That's something we have the common goals, like oncologists and cardiologists because we foresee some troubles in our survivors being 50 years old.
Barriers HCPs expressed concerns about the anticipated lack of collaboration with psychosocial care facilities. The connection with the territorial facilities on these mental and psychosocial health aspects is absolutely null, because these are particular patients who sometimes do not have pure psychiatric disorders, but have reactive syndromes, rather than from employment, social, economic, sentimental point of view, they are behind, and no one takes charge of these needs. This means that survivors' psychosocial needs may not be adequately addressed, despite the high demand for psychosocial care among the survivor population. Another potential barrier mentioned by HCPs was the uncertainty whether survivors who face difficulties with completing the survivor questionnaire or attending the LTFU clinic, have the opportunity to receive environmental/family support to assist them.
Facilitators HCPs expected that collaborations with various stakeholders in the context of LTFU care would facilitate the organization of LTFU care. This collaborative effort would involve cooperation with GPs, local care facilities, HCPs from different disciplines, psychosocial care facilities and the (inter)national network regarding LTFU care. These collaborations would facilitate the establishment of a care pathway for survivors, effective referrals and communication across healthcare disciplines, appropriately sharing of medical information, addressing psychosocial needs, and raising awareness about LTFU care. There is plenty of collaboration from all the specialists, we are organised for the most serious complications, and we have good cooperation in such different fields even with adult experts. Having such prominent collaborators like cardiologists here is another facilitator. Because we need to work on different levels to have internal medicine people who will focus on […], for example cardiovascular risk. That's something we have the common goals, like oncologists and cardiologists because we foresee some troubles in our survivors being 50 years old.
Organizational level 3.7.1 Barriers HCPs mentioned a lack of time and staff for various components of the PCFU Care intervention. Specifically, time limitations were expected for the survivor questionnaire procedures, such as processing the questionnaire results before the clinic visits. Moreover, the treatment summary was seen as a time‐consuming process. HCPs also indicated the challenge to both treat acute cancer patients and take care of survivors in terms of available capacity. Due to limited capacity, acute care is often prioritized over LTFU care. In the future, an increasing number of survivors will be seen at the LTFU clinic. The lack of staff, resources, and care facilities were major barriers for sustainable implementation. HCPs mentioned that the lack of available medical data and efficient Information and Communication (ICT) support would hinder the effective exchange of medical information for the treatment summary. We have different types of patient records, so, in the most of our area we can read the results, it is just one same system, but then there are also three more that are totally different and don't communicate. So, you rely on papers and papers being scanned and so. That is how the systems are, there are different healthcare providers who don't really speak electronically to each other. So if I would point out a possible barrier then it is still the communication with the three other regions which work with different communications systems and send you the patient data if they send it to you at all via paper or via post mail. Besides, HCPs would face difficulties with organizing multiple examinations on the same day. Another barrier was the lack of available psychosocial support and the absence of a structured psychosocial care pathway to adequately address survivors' psychosocial needs. We prevent the second tumor, we make them responsible for their project of care and life, but then instead all the psychosocial aspects that we identify, a treatment has not been thought through. On the field there are no structures to welcome them, there are no paths, there is no specific training, because either they are placed in the melting pot of adult psychiatric patients, and there is no response to their needs, or the problem is underestimated, and then they are left to themselves. 3.7.2 Facilitators HCPs identified specific professional roles that will facilitate the implementation of the PCFU Care intervention, as outlined in Table under organizational level within the subdomain ‘capacity.’ The nurse plays an important role in the guidance of survivors through the PCFU Care intervention. When CCSs receive guidance, HCPs expected that questionnaire responses will improve. Data‐managers and Information Technology (IT) experts would facilitate the extraction of data for treatment summaries and the development of high‐quality IT‐systems that are shareable between different healthcare settings to exchange medical information. Psychosocial roles, including psychologists, social workers, and occupational physicians were seen as important to guide CCSs from a psychosocial perspective. Having staff continuously available would facilitate sustainable implementation. We're recently well‐staffed for doctors, I would say. There are four at least at the department of oncology and two in the department of pediatric oncology. We have excellent secretaries that support us in such a good way. We have since two, three years now [an occupational therapist]. I think it's optimal. 
"If you can have somebody [a nurse] but with the skills. It's not any nurse. It's a nurse with the skills. I mean, you can train somebody, but [our nurse] comes with the knowledge of […] having worked with these patients already at the department of endocrinology on all the late effects associated there. So, we were blessed to have somebody who comes fully equipped from the beginning." Furthermore, the alignment of organizational structures and operating procedures with the elements of the PCFU Care intervention would facilitate its implementation. An organized structure would support efficient management of medical and psychosocial needs for CCSs.
Economic and political level 3.8.1 Barriers HCPs reported that convincing hospital managers to prioritize LTFU care and allocate adequate resources is a time-consuming process. Insufficient funding for certain components of the PCFU Care intervention and limitations in financing psychosocial care were identified. Furthermore, the lack of sufficient financial support for survivors might prevent them from attending the LTFU clinic. Some survivors would be willing to return to the LTFU clinic, but their financial situation would hinder them from doing so. Survivors with long travel distances face travel and accommodation costs and lost days of work that are usually not reimbursed. "We have patients who also come from outside the region, therefore they have to face expenses both in terms of travel and stay in the hospital facilities, therefore high costs that are not always reimbursable, and so also then especially with regard to the group of adults and young adults, even lost days of work, so this type of organization it is a barrier." HCPs mentioned the uncertainty of long-term financial resources for LTFU care as a barrier to achieving sustainable implementation. 3.8.2 Facilitators HCPs mentioned several facilitators for achieving (long-term) financial resources for LTFU care. International cooperation within LTFU care for CCSs would strengthen the argumentation for convincing stakeholders at both institutional and national levels to allocate resources for LTFU care. Reporting results and examples of LTFU care would raise awareness and advocate for sustainable resource allocation. "Hammering on the authorities to make them understand that this is an issue. And there it's always good to have the numbers. It's always good to have the numbers to be able to say, 'There are this many people, they are these ages, they have such and such issues, we see them at this and that regularity.' And I think the only thing that will affect someone sitting on the money and on the resource is being convinced by numbers, by data. Yeah, and try to calculate the health economics about it." Additionally, financial aid for survivors was expected to facilitate their participation in LTFU care.
DISCUSSION This study presents the first qualitative pre-implementation study exploring barriers and facilitators for implementing high-quality LTFU care for CCSs from the HCPs' perspective in four LTFU care clinics in Europe. Barriers and facilitators were identified within all six levels of the Grol and Wensing framework. Most barriers were identified on the organizational level, including insufficient staff, time, capacity, and psychosocial support. Other main barriers included limited knowledge of late effects among HCPs outside the LTFU care team, the inability of some survivors to complete the survivor questionnaire, and a lack of (long-term) financial resources. Main facilitators included motivated HCPs and survivors, a skilled hospital team, collaborations with important stakeholders such as GPs and psychosocial care facilities, utilization of international collaboration, and reporting LTFU care results to convince hospital managers. The potential implementation challenges of insufficient time, staff, and ICT resources have been mentioned previously by HCPs implementing LTFU care. , , As potential implementation strategies to mitigate these barriers, establishing efficient organizational structures that incorporate collaborative ICT systems (e.g., automatic data generation from electronic medical records, treatment summary databases, web-based survivor care plans) could be considered. , , , In addition, contracting specialized survivorship nurses, administrative staff, and data managers could be an implementation strategy to alleviate the oncologists' workload. Collaboration with hospital management and health insurers is crucial for resource allocation. Demonstrating the results and cost-effectiveness of LTFU care can support resource allocation at institutional and national levels. Another main study result is the need to enhance knowledge of and collaboration with GPs and HCPs from various disciplines and healthcare facilities. This can facilitate effective exchange of medical information between HCPs, appropriate CCS referrals, and the establishment of suitable care pathways to address CCSs' physical and psychosocial needs. The importance of improving knowledge, communication, and collaboration aligns with previous literature, , , , which primarily focused on LTFU shared-care models. Professional education for HCPs, including GPs, could be an implementation strategy to improve competence in LTFU care. , Our study suggests that survivors might lack trust in GPs and local healthcare facilities regarding LTFU care, potentially lowering compliance. HCP education has the potential to increase survivors' confidence in HCPs' and GPs' competencies as well. The present study highlights a gap in detecting psychosocial issues among CCSs at the LTFU clinic and the absence of an adequate follow-up care pathway to manage these issues. It is essential to address this barrier, as psychosocial issues are commonly experienced by survivors. , , The importance of addressing CCSs' psychosocial needs is underlined by the Institute of Medicine. It is crucial to improve access to and collaboration with psychologists and social workers, along with the establishment of a referral structure for psychosocial care. Our study also showed that some survivors may face limitations in participating in the PCFU Care intervention due to physical, mental, financial, and logistical challenges.
Therefore, providing guidance, such as aiding survivors with completing the survivor questionnaire, could be considered as an implementation strategy. Additionally, financial reimbursements as implementation strategies can aid survivors who face financial difficulties or who must cover travel and accommodation costs. However, dedicated funding to address the cost and travel burden for survivors remains a challenge. Online consultations and interventions may be a viable alternative for survivors who cannot easily visit the LTFU clinic. Prior reviews on implementing LTFU care for cancer survivors have predominantly examined GP-led LTFU care models, shared-care models between GPs and cancer specialists, and oncology nurse-led LTFU care models, with a focus on adult-onset cancers. , The strength of our study is that it concentrates on establishing LTFU clinics as a care model for the heterogeneous CCS population. LTFU clinics have the capability to manage CCSs who require complex care due to elevated risks of serious late effects. , , Another strength of this study lies in its incorporation of insights from diverse European healthcare systems, providing practical and detailed information on important barriers to address and facilitators to use for successful implementation efforts. This diverse overview of barriers and facilitators based on real-world settings is relevant for other hospitals willing to implement LTFU care for CCSs. Findings can be integrated into implementation strategies to enhance the provision of LTFU care for CCSs in Europe. A limitation is that this study only considers the HCPs' perspective. To design a comprehensive implementation strategy, future research should include perspectives from CCSs and their informal caregivers, hospital management, and policy makers. Another limitation is that data saturation may not have been fully reached in the four focus groups. Determining data saturation becomes more challenging when utilizing focus groups. Some barriers and facilitators were saturated across all four focus groups, while other factors were more specific to particular clinic sites. These contextual variations should be taken into account when designing a fitted implementation strategy for LTFU care. However, this study aimed to explore barriers and facilitators proposed by a varied group of HCPs from different European healthcare systems, with the purpose of gathering a diverse overview of barriers and facilitators. This study included 30 participants, among whom were the most important HCPs involved in LTFU care from the four European clinics that are part of the PCFU study. This exploratory design has the advantage of being relatively fast and inexpensive, and it can be replicated by other centers to identify barriers and facilitators specific to their healthcare setting with minimal resources. When interpreting the results, cultural differences may have affected the expressed content and openness in the focus groups. Additionally, two centers used their native language during the focus groups and the other two used English, which could have influenced the level of participation. However, centers could choose their preferred language, assuming English proficiency when opting for English. In Italy, the HCPs preferred to conduct the focus group in Italian, which was then translated into English without back-translation. Unfortunately, there is a potential risk of losing meaning by not translating the English version back into Italian.
CONCLUSION This study identified expected barriers and facilitators from the HCPs' perspective for successful implementation of high‐quality LTFU care for CCSs using the PCFU Care intervention. Our findings showed that specific attention should be given to knowledge, capacity, and financial issues, along with addressing psychosocial issues of survivors. The results support clinical staff in providing optimal LTFU care and offer practical guidance for integrating the PCFU Care intervention.
Dionne Breij: Conceptualization (equal); data curation (lead); formal analysis (lead); investigation (equal); methodology (equal); project administration (equal); writing – original draft (lead). Lars Hjorth: Funding acquisition (equal); resources (equal); writing – review and editing (equal). Eline Bouwman: Conceptualization (equal); data curation (equal); methodology (equal); writing – review and editing (equal). Iris Walraven: Supervision (lead); writing – review and editing (lead). Tomas Kepak: Funding acquisition (equal); resources (equal); writing – review and editing (equal). Katerina Kepakova: Project administration (equal); resources (equal); writing – review and editing (equal). Riccardo Haupt: Funding acquisition (equal); resources (equal); writing – review and editing (equal). Monica Muraca: Funding acquisition (equal); project administration (equal); resources (equal); writing – review and editing (equal). Irene Göttgens: Conceptualization (equal); data curation (lead); formal analysis (lead); investigation (equal); methodology (equal); project administration (equal); writing – review and editing (equal). Iridi Stollman: Data curation (equal); formal analysis (lead); writing – review and editing (supporting). Jeanette Falck Winther: Funding acquisition (equal); writing – review and editing (equal). Anita Kienesberger: Writing – review and editing (supporting). Hannah Gsell: Writing – review and editing (supporting). Gisela Michel: Writing – review and editing (equal). Nicole Blijlevens: Supervision (supporting); writing – review and editing (supporting). Saskia M. F. Pluijm: Funding acquisition (equal); writing – review and editing (equal). Katharina Roser: Writing – review and editing (equal). Roderick Skinner: Funding acquisition (equal); writing – review and editing (equal). Marleen Renard: Writing – review and editing (supporting). Anne Uyttebroeck: Funding acquisition (equal); resources (equal); writing – review and editing (equal). Cecilia Follin: Project administration (equal); resources (equal); writing – review and editing (supporting). Helena J. H. van der Pal: Funding acquisition (equal); writing – review and editing (equal). Leontien C. M. Kremer: Conceptualization (equal); funding acquisition (lead); methodology (equal); writing – review and editing (equal). Jaqueline Loonen: Conceptualization (lead); funding acquisition (lead); methodology (equal); supervision (lead); validation (equal); writing – review and editing (lead). Rosella Hermens: Conceptualization (lead); funding acquisition (lead); investigation (equal); methodology (lead); project administration (supporting); supervision (lead); validation (equal); writing – review and editing (lead).
The project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 824982. The material presented and views expressed here are the responsibility of the author(s) only. The EU Commission takes no responsibility for any use made of the information set out.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
This study adhered to local medical ethics committee (METC) procedures of the participating centers. The full names of the ethics committees were: Ethische Commissie Onderzoek UZ/KU Leuven (S63072) and Facultni nemocnice u sv. Anny v Brno, Eticka komise (41 V/2019). According to national legislation, and as confirmed by the health directors of the participating institutes, no ethical approval was needed in Lund and Italy.
Informed consent was obtained from all participants.
Permission was obtained via RightsLink (Elsevier) to reuse a figure that was previously published in the European Journal of Cancer: van Kalsbeek RJ, Mulder RL, Haupt R, Muraca M, Hjorth L, Follin C, et al. The PanCareFollowUp Care Intervention: A European harmonized approach to person-centred guideline-based survivorship care after childhood, adolescent and young adult cancer. European Journal of Cancer. 2022;162:34–44. Figure: The PanCareFollowUp Care Intervention steps: previsit preparation, clinic visit, and follow-up call.
Comparison of two tDCS protocols on pain and EEG alpha-2 oscillations in women with fibromyalgia
Fibromyalgia (FM) is a clinical condition characterized by the presence of generalized and disabling chronic pain, and may involve symptoms of depression and anxiety . Its diagnosis is based exclusively on clinical criteria, and there are no complementary tests that contribute to its identification. A challenge in FM is the range of therapeutic possibilities, as the drugs available for the treatment of chronic pain have provided only modest relief for these patients . Thus, the technique of Transcranial Direct Current Stimulation (tDCS) has shown promising results in the treatment of chronic pain in this population . The application of tDCS has been widely studied in other pain syndromes , , . However, the current evidence is still very limited in relation to the ideal treatment protocol, such as the frequency and duration of stimulation , , especially in relation to FM. The application of anodic tDCS over M1 has shown positive effects on pain levels, and repeated stimulation produces superior analgesia in individuals with FM . The effectiveness of tDCS applied for five days was demonstrated by Fagerlund, Hansen and Aslaksen , who found that stimulation was able to promote pain relief without serious adverse effects when testing tDCS in participants with FM in a hospital environment. Mendonca et al. , with a similar protocol, found that the intervention with tDCS provided a reduction in pain and anxiety in individuals with FM. On the other hand, Valle et al. used a 10-day consecutive tDCS protocol, and also achieved improvement in pain intensity in FM and long-term clinical benefits with stimulation over M1. Research comparing different protocols would help to clarify the best parameters and duration of treatment to be used for pain in FM, enabling the consolidation of a therapeutic protocol. In the present study, we compared the two most frequently used protocols for the pain treatment of FM, in which anodic stimulation is applied to M1 for five and ten consecutive days. An advantage of comparing protocols with different durations is to identify the tDCS protocol that produces satisfactory responses with a smaller number of sessions, consequently minimizing the occurrence of adverse effects and reducing the total time and costs of the therapeutic intervention. In addition to pain measures, we analyzed the cortical electrical activity associated with tDCS stimulation, which had not yet been investigated with these protocols in this population. Studies providing an electrophysiological measure of response to tDCS treatment may supply additional data beyond the behavioral measures. In this sense, the electroencephalogram (EEG) stands out as a tool for monitoring response to treatment , . Although there is disagreement in the literature, in general, the analysis of chronic pain in FM through the EEG shows altered alpha wave amplitude, most often studied in the frontal, parietal and occipital regions , . Alpha is commonly related to the state of relaxation . Villafaina et al. observed that individuals with FM showed a decrease in the alpha 2 power range in the resting condition, suggesting that chronic pain in these patients modulates this frequency range over time. For this reason, the alpha 2 frequency band was the subject of our study.
Theoretically, the manipulation of the amplitude of the frequency bands could be associated with behavioral changes . Therefore, the scarcity of studies that offer behavioral and physiological measures of response to treatment with tDCS underscores the importance of this study. In the present research, we compared two tDCS protocols for pain and their electroencephalographic correlates at rest in women with FM in the frontal, parietal and occipital regions. Our general hypothesis was that different tDCS protocols would lead to a decrease in pain, and differentially modulate the cortical electrical activity in women with fibromyalgia.
The average age of the participants was 44.81 years (SD = 8.8), and the average level of pain reported was 6.66 (SD = 1.70) on the VAS. The mean time since diagnosis of FM was 6.60 years (SD = 5.38), and there was no difference in this measure between the groups [F(2, 28) = 2.84; p = 0.075]. In addition, 77.40% reported medication use, 25.8% practiced physical activity, and 32.30% underwent psychotherapy. These measures also did not differ between groups [medication (χ²(2) = 0.26, p = 0.878); physical activity (χ²(2) = 1.67, p = 0.404); psychotherapy (χ²(2) = 0.59, p = 0.747)]. The participants did not report any complaints of adverse effects after the stimulation sessions, nor did they report any neurological complaints or comorbidities, as assessed by the CIRS. Also, the groups did not differ from each other before treatment in terms of pain level [F(2, 28) = 2.08, p = 0.143], anxiety levels [F(2, 28) = 0.57, p = 0.575], depression [F(2, 28) = 0.36, p = 0.699], or cognitive status [F(1, 28) = 2.91, p = 0.071]. There was an effect of the time factor on the pain variable [F(1, 28) = 8.02; p = 0.008; η² = 0.223], with a reduction in pain levels in general after treatment. However, there were no statistically significant differences for the group factor [F(2, 28) = 0.24; p = 0.792; η² = 0.017], and no interaction between the time and group factors [F(2, 28) = 0.90; p = 0.417; η² = 0.061]. Regarding the electrophysiological variables, there was an effect of the interaction between time and group on the frontal alpha 2 [F(1, 28) = 3.62; p = 0.040; η² = 0.261] and parietal alpha 2 [F(1, 28) = 4.95; p = 0.014; η² = 0.261] variables, and the participants who received stimulation for five consecutive days showed a significant reduction in the mean alpha spectral power post-intervention in the frontal (p = 0.039; d = 0.384) and parietal (p = 0.021; d = 0.520) regions. For alpha 2 in the occipital region, there was no interaction effect [F(1, 28) = 2.452; p = 0.104; η² = 0.149]. Group means for the electrophysiological and pain variables are shown in Table .
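As an illustration of the statistics reported above, the following minimal Python sketch shows how a mixed-design ANOVA with partial eta squared and a paired Cohen's d can be computed on simulated data with the same group sizes. The pingouin package and all scores are assumptions for illustration; the study itself used SPSS.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(42)

# Simulated long-format data: 3 groups (11/9/11 women) x 2 time points
rows = []
for subj, group in enumerate(["G1"] * 11 + ["G2"] * 9 + ["Sham"] * 11):
    pre = rng.normal(6.7, 1.7)            # baseline VAS-like pain score
    post = pre - rng.normal(1.0, 1.0)     # generic post-treatment reduction
    rows += [(subj, group, "pre", pre), (subj, group, "post", post)]
df = pd.DataFrame(rows, columns=["subject", "group", "time", "pain"])

# Mixed-design ANOVA; the 'np2' column holds partial eta squared
aov = pg.mixed_anova(data=df, dv="pain", within="time",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc", "np2"]])

# Paired Cohen's d for one group's pre vs. post scores
g1 = df[df.group == "G1"]
d = pg.compute_effsize(g1[g1.time == "pre"].pain,
                       g1[g1.time == "post"].pain, paired=True)
print(f"Cohen's d (G1, pre vs. post) = {d:.2f}")
```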
In the present study, we compared two tDCS protocols for pain and their associated electroencephalographic changes in the frontal, parietal and occipital regions in women with FM. We found no difference between the tDCS protocols applied to M1 for 5 days and 10 days on the reported pain. These results are in line with other studies that found no statistically significant difference between active tDCS and sham on pain levels in FM , but disagree with the findings of Fagerlund et al. and Valle et al. , who reported pain symptom alleviation after tDCS protocols applied to M1 for 5 days and 10 days, respectively, when compared to placebo. It is important to highlight that the optimal parameters of tDCS administration still need to be defined . Considering that studies vary in different aspects, information about the optimal parameters cannot be easily obtained from a comparison between these distinct studies . Previous findings demonstrate limits of the tDCS technique in inducing changes in cortical excitability – . Therefore, it is possible that our results indicate a ceiling effect of the cortical changes, so that it would be necessary to test longer protocols to overcome a possible plateau of brain responses induced by a 5- to 10-day protocol. In the present study, we found that tDCS modulated cortical electrical activity, with a decrease in alpha 2 spectral power in the parietal and frontal regions after treatment with the 5-day stimulation protocol. Similarly, Spitoni et al. found changes in alpha activity in the frontal and parietal regions, but not in the occipital region. Increased alpha amplitude is commonly associated with cortical deactivation and inhibition . Anodic tDCS is commonly associated with increased cortical excitability, so a decline in alpha amplitude is expected after anodic stimulation, as observed in our study for the 5-day stimulation group. In the other two groups, we observed an increase in alpha 2 activity, but below a statistically significant level. Perhaps this phenomenon corresponds to the effects of tDCS on inhibitory neurons, which could increase the alpha amplitude after stimulation . Considering that tDCS is capable of promoting changes in neuronal excitability , the neuromodulation of cortical activity may be associated with the decrease in pain. At the same time, the present results indicate a placebo response to the application of the sham protocol. Other tDCS studies reported similar findings . Placebo analgesic effects can be brought about by the expectation of symptom improvement – . Moreover, a recent meta-analysis of randomized controlled trials showed that placebo treatment is clinically effective in reducing pain in FM, and more strongly so in people with greater pain intensity . The magnitude of the placebo effect in FM may also be influenced by other factors, such as age, gender, disease duration, and the expected strength of treatment . This influence of expectations on tDCS outcomes has been reported in other studies , . Here, another possible explanation for the placebo effect is the parallel design of the study. A recent meta-analysis of non-invasive brain stimulation work showed a significant effect of the placebo in parallel designs, but not in crossover studies . Therefore, further studies may compare tDCS protocols using a parallel design, with the inclusion of a control without treatment (waiting list).
Likewise, the brain's ability to modify activity in specific structures in response to placebo analgesia , may explain the analgesia associated with an increase in mean alpha 2 spectral power in the parietal cortex in the sham group. The results reported here must be considered in light of some limitations. First, the high variability in the time since diagnosis, and, consequently, the time living with pain, may have influenced the results. To minimize this bias, it was ensured that the groups were homogeneous in relation to the time of diagnosis. Second, the electric field distribution of tDCS is influenced by the anatomical distribution of the head tissue. Hence, there is only limited control over the resulting current distribution in the brain . Inter-individual variations of the generated fields are likely a key factor contributing to the observed physiological and behavioral variability, which may explain the differences between groups with regard to the cortical changes. A computational model may provide information on the distribution of tDCS current in the brain as a function of anatomical factors. However, this method is costly and requires the participants to undergo an MRI prior to the tDCS application . Nevertheless, future studies may perform computational modeling of the current in order to control for the variability among individuals with FM. In conclusion, we found that the tDCS protocols with anodic stimulation over M1 for five and 10 consecutive days, as well as the sham protocol, produced similar results in the reduction of pain in women with FM. Nevertheless, the two tDCS protocols modulated alpha 2 cortical electrical activity in the frontal and parietal cortex in different ways. The alpha band is normally associated with a relaxed, passive, and defocused attention state; therefore, the modulation of alpha 2 may be related to behavioral changes in women with FM . Future studies may analyze the effects of longer-term tDCS protocols on pain and brain activity modulation.
The present study was a longitudinal, randomized, double-blind, placebo-controlled clinical trial, developed for women with fibromyalgia. The project was approved by the Research Ethics Committee of the Health Sciences Center of the Federal University under CAAE: 39796914.5.0000.5188. Written authorization was collected for the participation of each volunteer in the research, through the Informed Consent Form. Participants' autonomy and anonymity were guaranteed, ensuring their privacy regarding confidential data, as regulated by Resolution 466/2012 of the National Health Council. The ethical principles expressed in the Declaration of Helsinki were respected. Our clinical trial was registered on the Clinical Trials platform on 12/28/2017 and is available for public access on the website clinicaltrials.gov through the protocol NCT03384888. It was also registered on the ReBec platform (ensaiosclinicos.gov.br) with protocol RBR-5XBWJK on 06/24/2020. The Consolidated Standards of Reporting Trials (CONSORT) criteria were followed. Sample The sample was non-probabilistic and comprised 31 volunteers, aged between 27 and 58 years, who met the following inclusion criteria: (1) having a diagnosis of fibromyalgia, according to the criteria of the American College of Rheumatology; (2) having been diagnosed at least three months earlier; (3) being female; (4) being in the age group between 25 and 60 years; and (5) signing the Informed Consent Form. Women were excluded if they had a score below 24 on the Mini Mental State Examination (MMSE); metal implants in the head, cochlear implants, or a cardiac pacemaker; a history of seizures; or severe depression, with a score greater than 35 on the Beck Depression Inventory (BDI); illiterate and pregnant women were also excluded. Participants were randomly assigned to three groups: 11 women in Group 1, with anodic stimulation over the left M1 and cathodic stimulation over the right supraorbital region on five consecutive days; nine women in Group 2, with anodic stimulation over the left M1 and cathodic stimulation over the right supraorbital region on 10 consecutive days (excluding the weekend); and 11 women in Group 3 (sham), with simulated stimulation, following the protocol of Group 1. All volunteers were reevaluated within seven days after the end of the sessions to ensure the measurement of the effects resulting from the application of the current. Prior to the start of the sessions, training was conducted with the examiners to minimize random errors and differences between researchers. The training ended after the standardization of the process was ensured. The flow of participants through the study is described in Fig. . The evaluations and neuromodulation sessions were carried out individually in the Neuroscience Laboratory. Randomization and blinding The participants were randomly distributed by one of the researchers, in blocks at a 1:1:1 ratio, using an online randomization program ( www.random.org ). After randomization, the generated codes were placed in sequentially numbered, opaque envelopes and sealed in order to conceal the allocation. These envelopes were delivered to the researcher responsible for neurostimulation the day before the start of the sessions. The outcome evaluators and patients were blinded to the type of stimulation applied, and the person responsible for neurostimulation was blinded to the performance achieved by patients in the evaluations.
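A minimal sketch of permuted-block randomization at a 1:1:1 ratio is given below. The study itself used the random.org service and sealed envelopes; this Python snippet, with placeholder group labels and seed, only illustrates the allocation principle.

```python
import random

def block_randomization(n_participants, groups=("Group 1", "Group 2", "Sham"), seed=None):
    """Permuted-block allocation: each block holds one slot per group in
    shuffled order, so upcoming assignments stay concealed."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = list(groups)
        rng.shuffle(block)            # permute one complete 1:1:1 block
        allocation.extend(block)
    return allocation[:n_participants]

# Example: an allocation list for 31 volunteers, to be sealed into
# sequentially numbered opaque envelopes
print(block_randomization(31, seed=7))
```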
Attrition and adherence Attrition was defined as missing two sessions, or missing a single session without replacement, as well as the introduction of medication for continuous use after the initial evaluation. In order to facilitate the participants' adherence to the study, flexible hours for appointments were organized. Participants were also allowed to miss one day of attendance, which was replaced at the end of the sessions, and periodic calls were made in order to maintain contact and avoid withdrawal from the study. Outcome assessment tools The instruments used for data collection were: the Sociodemographic and Clinical Questionnaire, to characterize the sample; the Cumulative Illness Rating Scale (CIRS) , for the analysis of existing comorbidities; the Visual Analogue Scale (VAS) , to check the level of pain at the time of the evaluation; the Mini Mental State Examination (MMSE) , to assess the participants' cognitive status and serve as an exclusion criterion; the Beck Depression Inventory , to exclude participants with severe depression; the Beck Anxiety Inventory (BAI) , to verify that all participants were homogeneous in terms of anxiety level; and the electroencephalogram, to assess cortical electrical activity. The study phases are shown in Table . Evaluation protocol with electroencephalogram EEG data were collected from 32 electrodes placed on the scalp with an adjustable cap, following the International 10–20 EEG System, with impedance below 20 kΩ . The amplifier used was the ActiChamp, with a sampling rate of 500 Hz. During the collection, the participants sat comfortably in a chair and were instructed to avoid excessive body and eye movements, in addition to relaxing the mandibular musculature and avoiding muscle contractions in the face region, to decrease the presence of artifacts in the records during data acquisition. Data were collected at rest, 6 min with the participant's eyes open and 6 min with eyes closed . The time was divided into 2-min segments separated by small intervals and repeated three times, totaling 12 min of data acquisition . Neuromodulation protocol The tDCS sessions were performed individually, and the electrodes were placed at C3, which corresponds to the region of the primary motor cortex (M1), according to the International 10–20 EEG System . The protocol used was 20 min of stimulation per day; the first group was stimulated for five consecutive days, and the second group for two weeks (excluding weekends), totaling 10 sessions. The protocol for sham stimulation was identical to that of the first group, but the device was turned off 30 s after the start of stimulation, so as not to induce clinical effects. The TCT research equipment was used, with electrodes wrapped in 5 × 7 cm sponges moistened with saline (NaCl 0.9%). The applied current was 2 mA, with a current density equivalent to 0.05 A/m². At the end of each session, the participants were asked about the experience of adverse effects, in order to monitor the safety of applying the current.
Analysis of electroencephalographic data The power spectra of the frequency bands for the electrodes F3, F4, P3, P4, O1 and O2 were analyzed, each representing a cortical area (left and right frontal, left and right parietal, and left and right occipital, respectively), according to the International 10–20 EEG System. The analyses were performed using EEGLAB, a MATLAB toolbox. In the pre-processing, the data were filtered with a 0.5 Hz high-pass and a 30 Hz low-pass filter. The average of the electrodes was used as a reference, in order to remove possible spatial biases , and, subsequently, the Multiple Artifact Rejection Algorithm (MARA) was applied for removing artifacts. Only the data corresponding to the participants with their eyes closed were processed . Statistical analysis Statistical analyses were carried out using the Statistical Package for the Social Sciences (SPSS) software, version 22.0 for Windows. First, descriptive analyses were performed, using measures of central tendency and dispersion, to characterize the sample. For inferential analyses, the Shapiro–Wilk test was used first, which indicated that the data had a normal distribution. The chi-square test was used to compare the groups with respect to the variables physical activity practice, psychotherapy, and medication use. The one-way ANOVA test was performed to verify homogeneity between the groups before the start of treatment. For the pre- and post-treatment evaluation, a mixed-design factorial ANOVA was used, with three groups (Group 1, Group 2, and Sham) × two times (pre- and post-treatment). The level of significance considered was p < 0.05. For pairwise comparisons, the Bonferroni–Sidak post hoc test was used. Finally, we calculated effect sizes using partial eta squared for each variable within each group, with values of 0.01, 0.06, and > 0.14 reflecting small, medium, and large effects, respectively , and Cohen's d for comparisons between pairs, with values of 0.20, 0.50, and 0.80 reflecting small, medium, and large effects, respectively .
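As an illustration of the spectral analysis described above, the following Python sketch computes the mean band power of one pre-processed EEG channel with Welch's method. The actual pipeline used EEGLAB in MATLAB, and the 10–12 Hz alpha-2 limits assumed here are not specified in this excerpt.

```python
import numpy as np
from scipy.signal import welch

FS = 500                 # sampling rate used in the study (Hz)
ALPHA2 = (10.0, 12.0)    # assumed alpha-2 limits; not given in this excerpt

def band_power(channel, band=ALPHA2, fs=FS):
    """Band power of one pre-processed EEG channel, estimated with
    Welch's method (2-s segments) and integrated over the band."""
    freqs, psd = welch(channel, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

# Surrogate example: 2 min of noise plus an 11 Hz oscillation
t = np.arange(0, 120, 1 / FS)
channel = np.sin(2 * np.pi * 11 * t) + np.random.randn(t.size)
print(f"alpha-2 band power: {band_power(channel):.3f}")
```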
Myelin basic protein and neurofilament H in postmortem cerebrospinal fluid as surrogate markers of fatal traumatic brain injury
Traumatic brain injury (TBI), isolated or combined with other injuries, is a relevant post-traumatic prognostic factor for morbidity and mortality. In Germany, about 272,000 people suffer a TBI every year, and more than 5000 patients die as a consequence . With an incidence of 332/100,000 inhabitants, TBI is even more common than stroke (215/100,000 inhabitants) . The prognosis of patients depends on the primary mechanical brain damage as well as on the development of secondary sequelae such as increased intracranial pressure, ischemia, and hypoxia . To assess this primary and secondary brain damage more accurately, the clinical use of central nervous system (CNS) biomarkers has been repeatedly tested to diagnose TBI and to better understand the orchestration of secondary responses. So far, mainly structural proteins of the cell compartments of the CNS in serum and cerebrospinal fluid (CSF) have been analyzed as markers of acute brain trauma [ – ]. Investigations of fatal TBI cases have always been a classical domain of forensic medicine, with regard to traumatological and biomechanical aspects as well as contextual assessment . Currently, autopsy and histological examination of the traumatized tissue are the main investigations used in the forensic postmortem routine to evaluate lethality and survival time (wound age). In addition to forensic neuropathological diagnostic methods, postmortem biochemical analyses of various cytokines, acute phase proteins, CNS biomarkers [ , – ], or Na+-glucose transporters in CSF and brain tissue, as well as investigations of the early tissue reaction of local microglia after trauma, are meanwhile increasingly performed . Furthermore, the applicability of immunocytochemical staining in postmortem CSF could be demonstrated . Due to the extended length of axonal fiber tracts within the CNS, axons are particularly vulnerable to physical trauma of the brain tissue, resulting in white matter damage . In acute demyelination, it was demonstrated that microglia, as the major cellular component of the innate immune system in the CNS, preferentially accrue at and monopolize the CNS lesion site in a direct and immediate immunological reaction (neuroinflammation) , but this tissue reaction provides no detailed information regarding the amount of axonal injury. Axonal injury commonly occurs in both focal and diffuse brain trauma due to shear forces and can be found in TBI of all severities . Thus, investigating biomarkers or proteins expressed mainly or exclusively in the axonal parts of neurons, e.g., the myelin sheath or the axonal cytoskeleton, might help to represent the axonal component of TBI pathology and supply biochemical answers to the physical trauma. Apart from myelin oligodendrocyte glycoprotein (MOG), myelin basic protein (MBP) is one of the most abundant proteins in the white matter (accounting for 30% of the protein content of myelin) . It is a key structural component of the multi-layered myelin sheath covering nerve fibers. MBP maintains the correct structure of myelin, interacting with lipids in the myelin membrane . In myelinated fiber tracts of the white matter, MBP degradation by proteases such as calpain results in degradation of axons and the myelin sheath (demyelination) .
Thus, under these conditions, MBP or its fragmented or degraded forms might be released into the extracellular matrix after TBI (see Fig. ) and can thus be measured in CSF. In human studies of adult and pediatric TBI patients, MBP was found to be elevated in serum and CSF post-traumatically during lifetime [ – ]. In the postmortem field, MBP was considered an early marker of severe and moderate TBI in biochemical tests using CSF . Neurons of the CNS contain type IV intermediate filaments, also known as neurofilaments (NFs), which are composed of an assembly of three chains: a light chain (NF-L) weighing 68 kDa, an intermediate chain (NF-M) weighing 190 kDa, and a heavy chain (NF-H) weighing 210 kDa . NFs are a major cytoskeleton component and provide the structure and diameter of axons . After axonal damage, NF chains may dissociate from the cytoskeleton and be released into the cytosol or possibly the extracellular fluid, especially if cell membrane integrity is altered. Here, NFs could serve as biomarkers of traumatic axonal injury (see Fig. ). In rats, serum and CSF levels of NF-H were shown to correlate with the severity of a mechanical impact in an impact acceleration model . Zurek et al. reported the efficacy of serum NF-H measurement in predicting the injury type and outcome in children after TBI. In this study, levels of NF-H were significantly higher in patients with diffuse axonal injury (DAI) on initial CT scans compared to those without DAI . Vajtr et al. compared serum NF-H concentrations between DAI and focal injury, showing that the median serum NF-H was higher in DAI compared to focal TBI. These findings suggest a more specific role of NF-H in axonal injury , especially in distinguishing DAI from focal injury, forming the rationale for choosing NF-H as an example of the different NFs in this study. Due to the multi-component pathology of TBI, it would be ideal to define biomarkers closely matching various pathological processes, including the axonal components of TBI. This has not yet been done in forensic pathology studies. The aim of the present study was, therefore, to biochemically investigate the potential use of MBP and NF-H as promising postmortem cerebral neuroinjury biomarkers for determining TBI as the cause of death compared to natural causes.
Sampling and processing CSF samples were collected by semi-sterile puncture of the suboccipital space during head evisceration in a total of 40 forensic autopsy cases. The samples were divided into cases with lethal TBI (total number n = 21; case characteristics are indicated in detail in Table ) and compared to a cohort of cardiovascular fatalities (CVF) as controls (total number n = 19; n = 7 sudden cardiac death, n = 9 acute myocardial infarction, n = 3 ruptured aortic aneurysm; sex, age, and post-mortem interval (PMI) distribution among controls in Table ). Trauma cases were collected with different survival times, ranging between hours and weeks, to cover a broader time interval of survival. The cases were derived from routine medicolegal autopsies performed at the Institute of Forensic Medicine of the University of Wuerzburg. Exclusion criteria for sampling were as follows: presence of former CNS injuries ("repetitive" trauma), neurodegenerative diseases, or putrefactive tissue changes. Police and medical records were used to obtain information regarding the history of older CNS injuries. The local Ethics Committee approved the study (local no. 203/15). The study included 15 females and 25 males ranging from 21 to 91 years, with a PMI varying between 1 and 13 days. CSF samples were immediately centrifuged at 5000 rpm for 5 min at 4 °C, and the supernatants were aliquoted and stored at – 80 °C without any thawing cycles until analysis, to allow for both cytological and biochemical analyses. CSF MBP and NF-H concentrations were measured using commercially available double-sandwich ELISA kits according to the manufacturers' protocols (MBP: Mybiosource, San Diego, USA; Cat. No. MBS261463; NF-H: Cusabio, Houston, USA; Cat. No. CSB-E16097h). In brief, standards and CSF samples were incubated in microplate wells precoated with anti-human MBP and anti-human NF-H antibodies. Then, they were incubated with a biotin-labelled anti-human MBP/anti-human NF-H antibody solution, followed by incubation with a streptavidin–horseradish peroxidase conjugate. The plates were washed four times with washing buffer between each step. After the last washing step, the substrate was added. The reaction was stopped after 10 min by adding an acidic stop solution. The absorbance of the resulting color product was measured by reading the ELISA plate at 450 nm. The concentrations of MBP/NF-H within the samples were then determined using the standard curve. The minimum detectable amount (limit of detection, LOD) was 5 pg/ml for MBP and 0.06 ng/ml for NF-H. MBP samples above the detection range (28.4–298.4 pg/ml) were diluted (1:3); NF-H samples above the detection range (0.1–56.6 ng/ml) were diluted (1:5); diluted samples were then reanalyzed, with the results multiplied by the appropriate dilution factor. All samples were assayed in duplicate, and the arithmetic mean of both results was used for statistical analysis.
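As an illustration of how concentrations are read off the standard curve, the sketch below fits a four-parameter logistic (4PL) model, a common choice for sandwich ELISAs, with scipy. The curve model and all numbers are assumptions for illustration; only the dilution handling (e.g., 1:3 for MBP samples above range) follows the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a = response at zero dose, d = response at infinite
    dose, c = inflection point, b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def od_to_concentration(od, params, dilution=1.0):
    """Invert the fitted 4PL and apply the sample's dilution factor."""
    a, b, c, d = params
    return dilution * c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

# Hypothetical MBP standards (pg/ml) and measured optical densities
standards = np.array([28.4, 50.0, 100.0, 150.0, 200.0, 298.4])
od_values = np.array([0.22, 0.36, 0.60, 0.78, 0.90, 1.06])

params, _ = curve_fit(four_pl, standards, od_values,
                      p0=[od_values.min(), 1.0, np.median(standards),
                          od_values.max()], maxfev=10000)

print(od_to_concentration(0.40, params))               # neat sample
print(od_to_concentration(0.40, params, dilution=3.0)) # 1:3 diluted sample
```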
Consecutive sections were mounted on microscope slides and stained immunohistochemically, as previously described , with commercially available antibodies against MBP at a dilution of 1:40 (Diagnostic BioSystems, Pleasanton, USA), against NF-H at a dilution of 1:400 (Zytomed, Berlin, Germany), and against TMEM119 at a dilution of 1:1000 (Sigma, St. Louis, USA). Moreover, CSF cytospin preparations were stained immunocytochemically with the antibodies mentioned above at identical dilutions. Microphotographs of the brain sections and CSF cytospin preparations were taken with an Olympus DP 26 digital camera.

Statistical analysis

Case characteristics were collected and stored with Excel Version 16.15 (Microsoft Corporation, Redmond, USA), and GraphPad Prism software version 8 was used for statistical testing (GraphPad Software, La Jolla, USA). The D’Agostino & Pearson test was used to assess whether the samples and sample characteristics were normally distributed. Biomarker levels were then analyzed using an unpaired, two-sided t test for normally distributed data or a Mann–Whitney U test for non-normally distributed data, both for comparisons with controls and between different traumatic entities. Age and PMI between the groups were compared using Mann–Whitney U tests. Receiver operating characteristic (ROC) curves were plotted to evaluate the area under the curve and the sensitivity and specificity values of thresholds. P values equal to or less than 0.05 were considered statistically significant. Mean values ± standard deviations are reported in the text.
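To make these laboratory and statistical steps concrete, the sketch below illustrates (a) the ELISA back-calculation described above, assuming a four-parameter logistic (4PL) standard curve (the kit software may use a different curve model), with duplicate averaging and dilution-factor correction, and (b) the normality-driven choice between t test and Mann–Whitney U test followed by a ROC analysis; scipy.stats.normaltest implements the D’Agostino & Pearson test. All numerical values are illustrative placeholders, not study data.

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit
from sklearn.metrics import roc_curve, auc

# (a) ELISA back-calculation: fit a standard curve, average duplicate
# wells, and apply the dilution factor for samples reanalyzed above the
# detection range (1:3 for MBP in this illustration).
def four_pl(x, a, b, c, d):
    # 4PL model: a = response at zero analyte, d = upper asymptote,
    # c = inflection concentration, b = slope
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([28.4, 50.0, 100.0, 150.0, 200.0, 298.4])  # pg/ml
std_od = np.array([0.11, 0.19, 0.37, 0.55, 0.70, 0.95])        # OD at 450 nm
(a, b, c, d), _ = curve_fit(four_pl, std_conc, std_od,
                            p0=[0.05, 1.2, 120.0, 1.4], maxfev=10000)

mean_od = np.mean([0.42, 0.44])  # arithmetic mean of the duplicate wells
conc = c * ((a - d) / (mean_od - d) - 1.0) ** (1.0 / b)  # invert the 4PL
print(f"MBP: {conc * 3:.1f} pg/ml (1:3 dilution factor applied)")

# (b) Normality test, group comparison, and ROC analysis
rng = np.random.default_rng(0)
tbi = rng.normal(160, 60, 21)       # hypothetical CSF levels, TBI group
controls = rng.normal(93, 40, 19)   # hypothetical controls

normal = all(stats.normaltest(g).pvalue > 0.05 for g in (tbi, controls))
if normal:
    _, p = stats.ttest_ind(tbi, controls)     # unpaired, two-sided t test
else:
    _, p = stats.mannwhitneyu(tbi, controls)  # Mann-Whitney U test
print(f"group comparison: p = {p:.4f}")

labels = np.r_[np.ones(tbi.size), np.zeros(controls.size)]  # TBI = 1
fpr, tpr, thresholds = roc_curve(labels, np.r_[tbi, controls])
print(f"AUC = {auc(fpr, tpr):.3f}")
```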
Biomarker concentrations in CSF of fatal TBI cases were compared with those of acute cardiac death cases as a control group. While both groups differed statistically with respect to the age of the deceased (p < 0.05), the TBI cases being older than the controls, they were matched for PMI (p = 0.419) and gender distribution (p = 0.745). MBP concentrations were normally distributed within the case and control groups (see Fig. ). TBI levels were significantly higher than in control cases (p = 0.006). The mean MBP concentration in CSF was 159.6 pg/ml in the TBI group and 93.4 pg/ml in the controls (see Fig. ). A conservative threshold of > 169 pg/ml of MBP was determined, with a specificity of 94.7% and a sensitivity of 42.9% (area under the curve 0.7519, see Fig. ). Cases with CSF MBP levels > 169 pg/ml were thus 8 times more likely to be TBI cases than cardiovascular controls (a worked check of this figure is given at the end of this section). There were no significant differences between TBI cases with intracranial bleedings only and those with additional parenchymal bleedings such as cortical contusions (p = 0.1346; see Supplemental Table ). For NF-H, the readings in the case group followed a Gaussian distribution, whereas those in the controls were non-normally distributed (see Fig. ). While CSF levels from control cases were largely close to the LOD, the TBI group showed highly significantly elevated levels in CSF (p < 0.0001, see Fig. ), but no differences regarding the TBI bleeding type (p = 0.7240; see Supplemental Table ). With an area under the curve of 0.8446, a conservative threshold for NF-H was found to be 6 ng/ml (specificity 98.5%, sensitivity 81%, see Fig. ). To verify the measured levels in CSF, immunohistochemical staining against MBP as well as against NF-H was performed on randomly chosen cerebrum samples from cases also examined biochemically (5 TBI cases/5 controls). Compared with the control group (see Fig. ), in which no injury to the brain parenchyma in the form of contusions or hemorrhages was identified, the TBI cases showed a visually reduced staining reaction against MBP (see Fig. ). Quantifications were not performed as part of this study. These stains were complemented by immunocytochemical staining of CSF against MBP. MBP-positive phagocytic cells were detected in the CSF of four TBI cases with a prolonged survival time of more than 24 h (see Fig. ), whereas the CSF cytochemical preparations of control cases remained negative. On immunohistochemical staining against NF-H, the TBI cases repeatedly showed ruptured neurofilaments compared with the control group (see Fig. ). Immunocytochemical detection of NF-H in CSF failed despite multiple adaptations of the staining protocol (see Fig. ). When examining the response of the resident microglia and a potential interplay of microglial activation with demyelination and axonal damage in the brain parenchyma, a marked activation of microglia in brain tissue was observed in the trauma cases (see Fig. ), while in the control group, predominantly ramified microglia were immunolabelled. In the CSF of the 5 TBI cases, numerous TMEM119-positive cells were detected with varying staining intensity (see Fig. ). Controls did not present TMEM119-positive cells in CSF.
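The “8 times more likely” statement above can be read as the positive likelihood ratio at the MBP cut-off; assuming that interpretation, a quick check from the reported sensitivity and specificity:

```python
# Positive likelihood ratio at the > 169 pg/ml MBP cut-off, computed from
# the reported sensitivity and specificity (assumes the 8-fold statement
# refers to LR+ = sensitivity / (1 - specificity)).
sensitivity = 0.429   # 42.9%
specificity = 0.947   # 94.7%
lr_positive = sensitivity / (1 - specificity)
print(f"LR+ = {lr_positive:.1f}")   # ~8.1, i.e. roughly 8-fold
```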
In the present study focusing on the biochemical determination of MBP and NF-H, both biomarkers showed an increase in the CSF of individuals dying after TBI compared to cardiovascular controls. By determining their CSF concentration, it is thus possible to biochemically distinguish a TBI from a control group of corresponding deaths. A study by Olczak et al. demonstrated the suitability of MBP alongside GFAP and NF-L as early biomarkers of lethal TBIs, although the calculated conservative threshold of MBP in that study was higher (1356.74 ± 323.66 pg/ml) than in the present paper (169 pg/ml). A possible explanation for this discrepancy could be that the study material of Olczak et al. included not only fatal TBI cases but also cases with minor post-traumatic neuropathological findings. In principle, CSF communicates without barriers with the extracellular space of the central nervous system (CNS), i.e., with the milieu surrounding the neurons and glial cells, and therefore seems suitable to reflect central nervous processes such as biochemical processes in the brain after traumatization (“neuroforensomics”), not least because of its apparent postmortem stability . In addition, this makes it possible to investigate CSF even in cases without visible signs of impact; in the light of the results presented here, this appears promising for identifying possible central nervous involvement and for helping to clarify controversial causes and circumstances of death. Especially TBIs with a predominantly axonal component (traumatic axonal injury, TAI), clinically also referred to as DAI, may escape the expert’s attention in the (macroscopic and radiological) forensic assessment of central nervous involvement, because they leave no distinct morphological correlate in the tissue. Such injuries can be detected only by microscopic examination, for example via the detection of “retraction bulbs and varicosities” or immunoreactivity to beta-amyloid precursor protein (β-APP) , which is considered the gold standard for the neuropathological identification of axonal injury ; these findings, however, require a longer survival time of the deceased and must be distinguished from secondary hypoxic-ischemic tissue changes . Axonal injury can be divided into primary and secondary axotomy. Primary axotomy is a mechanical breakage of an axon resulting from forces transmitted by the traumatic impact , whereas secondary axotomy is delayed and occurs as part of the pathophysiological processes underlying DAI. Rotational acceleration of the brain can cause stretching of white matter axons, leading to dysregulation of sodium influx and potassium efflux and culminating in an increase in intracellular calcium concentration with pleiotropic effects within the neuron . One effect involves stimulation of two systems: calpain-mediated necrosis and caspase-mediated apoptosis. Calpain-mediated proteolysis predominates in the initial phase of severe TBI, resulting in biomarker release into human CSF during this phase . Proteolytic activity results in disruption of the axonal cytoskeleton and degradation of structural proteins such as neurofilaments, MBP, Tau protein, amyloid protein, and spectrin breakdown products (SBDP) [ – ]. Because these biomarkers are accepted to arise directly from axons, they could reflect the axonal component of TBI pathology and thus serve as an indirect reference to TAI. Several other biomarkers have been studied, such as GFAP, NSE, and S-100B.
While they are all of relevance to TBI, their cellular expression patterns indicate that they share no direct conceptual link with the axon itself . In the present paper, biochemical measurement of the axon-specific biomarkers MBP and NF-H in CSF turned out to be very suitable to distinguish TBI from a control group, reflecting the share of TAI in the lethal TBI cases studied, on the basis of the immunohistochemically displayed expression patterns of both structural markers. The detection of increased MBP and NF-H levels in CSF after TBI nevertheless has to be interpreted carefully. Because these biomarkers are accepted to arise directly from axons, they could reflect the axonal component of TBI as well as other forms of ischemic injury. More research is therefore needed to differentiate traumatic axonal injury from global ischemia, and further studies should include hypertensive brain hemorrhage and ischemic brain infarction data to show the effects of hypoxia of brain tissue without traumatic impact on CSF biomarker levels. Since the orchestration of neuroinflammation after TBI is multiform and complex , a multi-methodological approach was used in the present study in addition to the primarily biochemical investigation of CSF, namely immunohistochemistry of traumatized brain tissue and immunocytochemistry of CSF, to additionally confirm the biochemical evidence of a potential traumatization of the brain parenchyma. Elevated MBP and NF-H levels in CSF were associated with a reduced staining response of the myelin sheath, indicating demyelination, and with the presence of ruptured neurofilaments, respectively , whereas control cases with low MBP and NF-H CSF levels did not show comparable immunohistochemical changes in the white matter. The observation of increased MBP and NF-H levels in CSF after TBI was further supported by the concomitant activation of microglia demonstrated in the respective brain tissue, whereas in the control cases, with correspondingly low MBP and NF-H values, predominantly ramified microglia were detected. As mentioned in previous publications, myelin released after damage to the myelin sheath has been attributed a special role as a stimulus of the resident microglia, with subsequent activation . In addition to a very early response of microglia after TBI, one of our own publications also showed activation of so-called M2 microglia/macrophages, which contribute to the regeneration of injured brain tissue through their phagocytic activity . Demyelination, such as after TBI, results in an increased release of lipid components such as phospholipids and cholesterol, which are major constituents of myelin and are phagocytosed. This may explain the presence of “fat-containing” macrophages . Up to now, the literature contains little information as to when these “fat-containing” macrophages begin to appear . According to Oehmichen et al., they were detected 17 h, 5–6 days, and in one case even 30 years after a TBI had occurred . Thus, the detection of “fat-containing” macrophages in the tissue may support the observation, also illustrated here, that MBP-positive macrophages can be found in the CSF of TBI cases with a prolonged survival time.
Moreover, their detection in the CSF seems to depend not only on a longer survival time but also on a “lagging behind” of the CSF relative to the brain parenchyma as a result of a so-called passage delay, which could be repeatedly observed in our cases: phagocytosis primarily takes place in the tissue, i.e., at the site of direct trauma, and the phagocytically active cells can only afterwards enter the CSF. Thus, the time passing before entry into the extracellular space could play a role before CSF immunocytochemistry becomes positive for the axonal biomarkers discussed here. This potential time latency is currently being investigated in an accompanying study and will be reported separately. NF-H could not be detected by immunocytochemical staining of CSF. A possible explanation could be that, as shown in the rat brain, a postmortem accumulation of neurofilaments takes place in the perikaryon . In addition, other studies reported that NF-H is particularly stable compared with NF-M/L because it has the highest degree of phosphorylation (dephosphorylation increases sensitivity to enzymatic degradation) and the ability to bind to calmodulin . For this reason, it seems hypothetically conceivable that, in the course of traumatization and the associated damage to the axonal cytoskeleton, NF-H and other neurofilaments are temporarily stored in the perikaryon. This storage is interrupted by progressive neuronal damage, so that the neurofilaments, bypassing phagocytosis or using other transport mechanisms that require a living organism (vital reaction), reach the extracellular space, where they can be detected biochemically in CSF but remain masked to methods such as immunocytochemistry.
In our study, we used, on the one hand, heterogeneous study material with a wide and statistically divergent age range and different postmortem intervals, which nevertheless represents our daily autopsy material. On the other hand, factors such as the ambient temperature of the body at the time of death, freezing of CSF samples for storage until measurement, and undetected neurodegenerative diseases or past minor traumas may influence the concentration levels of the measured biomarkers. We tried to rule this out by strict sample selection with exclusion of chronic neurodegenerative diseases and repetitive trauma. To establish the relationship between postmortem MBP and NF-H levels and brain tissue damage as a sign of TBI including axonal components, further investigations are necessary to differentiate between direct traumatic axonal damage and secondary ischemic injury. Attempts were made to group the TBI cases by survival time in order to draw conclusions about basic post-traumatic changes; a comparison of the measured values with regard to the length of survival time was not performed due to the small number of cases. At the beginning of the present study, a control group of cardiovascular fatalities was defined to allow a representative comparison of TBI cases with one of the most common causes of (natural) death in forensic autopsy material. However, this preselection does not allow the results to be compared uncritically with other causes of death, such as hypoxia following strangulation or cerebral hemorrhage from an internal cause. Additionally, the chosen control cases might be influenced by heterogeneous effects such as different durations of agony; in particular, cases of acute myocardial infarction could involve a longer agonal period, which could play a role in secondary CNS ischemia. The definition of a control group was necessary to keep the study within an economically reasonable scope. The immunohistochemical and immunocytochemical examinations were likewise only possible on a representative basis due to budget constraints, and further studies should include immunoreactivity of β-APP for the aforementioned reasons.
In conclusion, the present study focusing on postmortem biochemical analysis demonstrated that MBP and NF-H are promising cerebral neuroinjury biomarkers that appear suitable to differentiate TBI from cardiovascular death. The multi-methodological approach via immunohistochemical and immunocytochemical staining can help to verify biochemical results and supplies an additional tool in the forensic neuropathological interpretation of TBIs.
Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 20 KB)
Person-centred integrated primary care for refugees: a mixed-methods, stepped wedge design study to assess the impact | 74907ab8-bcb1-440a-96b0-35ce46349eb7 | 11883791 | Patient-Centered Care[mh] | Since 2015 some 56.9 thousand refugees settled in Dutch municipalities after being granted a refugee staying permit by the Dutch government (CBS (Dutch Central Bureau of Statistics), ). About 35% of the refugees are under the age of 18. After settling, they are enlisted in a general practice, as is the case with all Dutch people. In the Netherlands, the general practitioner (GP) is a gatekeeper to the healthcare system and the first point of contact for all health-related problems, including mental health problems for which also mental health practice nurses are available in GP to provide support and treatment. In case of severe mental health problems, patients are referred to specialist mental healthcare. Health insurance is mandatory for all Dutch citizens and covers costs for specialist care, including specialist mental healthcare; each year people have to pay the first costs (appr. 300 euros) themselves, except for general practice, which is free of charge. Youth care is also free of cost and is paid for by the municipality. The traumatic experience of organized violence has been identified as a significant risk factor for mental health problems, like post-traumatic stress disorder (PTSD), depression, and anxiety disorders (Alisic et al. , , Dangmann et al. , ). Prevalences of these problems vary widely between refugee groups and studies but are much higher than among non-refugee youngsters. For instance, the prevalence of PTSD among refugee children is estimated at 19–53%, compared to 16% in other children who experienced trauma (Dangmann et al. , ); depression is seen in 14% of refugee children worldwide compared to 3% in other children (Dangmann et al. , ). However, traumatic experiences are not the only nor major determinant of mental health of refugees. Other sources of chronic stress like insufficient household income and social exclusion have major long-term effects on health. They can lead to behavioural problems, sleeping disorders, eating problems, generalized pain, or bedwetting (Heptinstall et al. , ; Ehntholt & Yule, ; Bronstein & Montgomery, ; Pacione et al. , ; Dangmann et al. , ). As such, the well-being of children is related to that of their parents or guardians (Summerfield, ; Fazel et al. , ; Hirani et al. , ). Mental distress and mental health problems in refugee minors therefore are relevant for all health professionals and GPs in particular. GPs could have played a key role in the recognition of these problems in refugee minors. Yet there are indications of underdiagnosis of mental health problems in refugees and particularly refugee minors (Lamkaddem et al. , ; Dagevos et al. , ; Hodes & Vostanis, ). It seems more difficult to recognize mental distress and mental health problems across language and cultural differences, especially in groups that are not used to talking about these problems. Therefore, the two-year Empowerment programme was developed and implemented in four general practices, to increase the awareness and skills of GPs to recognize, discuss, and attend to mental distress and health problems in refugee minors. 
In this mixed-methods study, we evaluated this programme and aimed to answer the following research questions:

1. Does a programme aimed at improving culturally sensitive person-centred integrated care and interprofessional collaboration in general practice increase the recognition, discussion, and guidance of mental distress and health problems in refugee youth? We hypothesized first that, before the implementation of the programme, the number of general practice consultations with refugee minors in which mental health is discussed would be lower than in other minors, and second that this number would increase after the implementation of the programme.

2. How is the programme experienced by the GP staff and others involved in the programme?
Setting

From September 2019 until September 2021, four general practices in four different municipalities in South-Eastern Netherlands engaged in the Empowerment project (Radboud University, ). This project aimed to improve the recognition, discussion, and guidance of mental distress in refugee children. Based on literature and interviews with refugees, doctors, and mental healthcare nurses (MHNs) in general practice, as well as other professionals involved in the support of refugee children, we developed training and guidance for culturally sensitive person-centred care for GP staff (see box 1 for the content of the Empowerment programme). After the training, with the help of the guidance, the GPs and MHNs started their part of the intervention, which consisted of an extensive introductory meeting with each refugee family in their practice. In the introductory meeting with the refugee parents, sometimes in the presence of their children, attention was paid to medical problems, but also to the family composition and history and to their social and financial circumstances. In addition, meetings were organized in the four participating municipalities involving the GP/MHN, refugee representatives, and organizations in the field of support of refugee minors. The goal of these meetings was to strengthen interprofessional collaboration and psychosocial support tailored to the refugees’ needs. The implementation of the programme was hampered by multiple lockdowns due to the COVID-19 pandemic, which also burdened general practices.

Design

This mixed-method study consisted of a quantitative cohort study to answer research question one and qualitative semi-structured interviews to answer research question two. For this report, we used a checklist specifically for mixed-methods studies (Fetters & Molina-Azorin, ). In this section, we first describe the methods applied for the quantitative cohort study and then those applied for the qualitative interviews.

Cohort study

Design

We studied patient records in four general practices. We compared the number of GP consultations in which mental health was discussed in refugee minors with the number of these consultations in other minors before (from 01-09-2014 to 01-09-2019) and after the implementation of the Empowerment programme (from 1-1-2020 to 1-9-2021). We chose to study a five-year period prior to the implementation of our programme, as we wanted a substantial number of consultations to assess whether or not the number of discussions on mental health was lower in refugee youth (our first hypothesis). After the implementation of the intervention, our possibilities for evaluation were limited to two years; however, we adjusted our results to patient-years to be able to compare both periods. The implementation of the programme (01-09-2019 till 01-09-2021) was performed as a stepped wedge design study. In a stepped wedge design, every cluster starts with a control period; each cluster then starts with the intervention (in this case the Empowerment programme) at a different time. At the end of the stepped wedge design study, all clusters have implemented the intervention (Zhan et al., ). This resulted in the following five steps (see Figure ). Due to the COVID-19 pandemic, the start in practices 2, 3, and 4 was delayed, resulting in a shorter post-intervention period.
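To illustrate the stepped wedge logic, the sketch below builds the 4-practice, 5-step schedule as a 0/1 matrix (0 = control period, 1 = intervention); the uniform one-step stagger is an assumption for illustration, since the actual start dates were shifted by the pandemic.

```python
# Stepped wedge schedule: each practice (cluster) crosses from control (0)
# to intervention (1) at its own step; by the final step all practices
# have implemented the programme.
import numpy as np

n_practices, n_steps = 4, 5
schedule = np.zeros((n_practices, n_steps), dtype=int)
for practice in range(n_practices):
    schedule[practice, practice + 1:] = 1  # practice i switches at step i + 1
print(schedule)
# [[0 1 1 1 1]
#  [0 0 1 1 1]
#  [0 0 0 1 1]
#  [0 0 0 0 1]]
```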
Study population

The study population consisted of all minor refugee patients, that is, children of parents who both came to the Netherlands as refugees less than 10 years ago, who were registered with the four participating GP practices on 1 September 2014. Their data were manually selected from the patient records, based on surname/country of birth. In case of doubt, the researcher asked the GP whether the patient was indeed a refugee. In this case-control design, we matched each refugee minor in the participating practice to the first control minor of the same gender and age group in that practice of whom at least one parent was born in the Netherlands. After identification, the patient records were anonymized.

Data collection cohort study

The following information was extracted from the patient records: age and gender; country of origin of parents; number of consultations between 1-9-2014 and 1-9-2021, divided into the period before and after the start of the Empowerment programme in that particular practice; diagnoses coded according to the ICPC (International Classification of Primary Care); specific ICPC codes of the P category (referring to psychosocial problems/mental distress); and mention of discussions of mental health issues and of referrals to mental healthcare or social care.

Data analysis cohort study

The data from the first five years (2014–2019) were descriptively analysed before the start of the study to answer our first hypothesis. Means and standard deviations (std) or medians and interquartile ranges were determined for continuous characteristics, and numbers and percentages for categorical characteristics. The difference in the number of consultations and diagnoses between the refugee group and the control group was tested by the incidence rate ratio (IRR). For the second hypothesis, we used a mixed-effect logistic model with practice as a random factor and group (intervention/control period) and step as fixed effects. The difference in percentages of consultations and discussions about mental health before and after the start of the Empowerment programme was expressed as an odds ratio (OR) with 95% confidence interval. A value of p < 0.05 was considered statistically significant for all analyses, based on two-sided testing. Analyses were performed using the Statistical Package for Social Sciences (SPSS, IBM Corp., Armonk, NY) version 25. A computational sketch of the IRR and the mixed-effect logistic model is given at the end of this Methods section.

Qualitative semi-structured interviews

Design

In order to develop an intervention tailored to the needs of refugees and GPs, at the beginning of the study we interviewed 15 refugee parents, 6 GPs, and 4 MHNs to elicit their experiences with refugee children and mental distress or mental health problems.

Study population for the qualitative semi-structured interviews

Refugee participants were recruited from the network of the authors (ML, JR, MvdM, MdK) through purposive sampling, striving for diversity regarding gender, age, educational background, and country of origin. Before deciding whether to participate, all participants received elaborate information about the goals, methods, and procedures of the study. The four participating general practices, with a total of seven doctors and four MHNs, were also recruited through the network of the authors (BW and MvdM). Before the start of the study, all but one (one doctor who was not available at that time) were interviewed. The participants in the local interprofessional collaboration groups were recruited by the local practices.
For our interviews about the experiences with the Empowerment programme after the intervention, we recruited a convenience sample of 12 participants in total (five GPs, two MHNs, four other healthcare or social workers, and one refugee representative, all of whom had participated in the local interprofessional collaboration groups).

Data collection and analysis of the qualitative interviews

The topic guide for the interviews before the start of the intervention, based on literature and expert opinion, contained questions about experiences with and knowledge of mental distress and mental health problems, health-seeking behaviour of refugees, barriers and facilitators in accessing and providing care, and experiences with GP care for refugees. The topic guide for the interviews on experiences with the programme, also based on literature and expert opinion and on the pre-intervention interviews, contained questions about the content and provision of the training, the guidance, the practicalities of implementing the guidance in practice, the self-assessed ability to address the needs of refugee patients and possible improvement in this after the implementation of the programme, and the experiences with interprofessional collaboration before and after the implementation of the programme. The interviews were performed by several researchers (authors RÇ, BW, JR, and MdK). The interviews were recorded and transcribed verbatim using F4 software. Data of respondents will be stored for 15 years at the research location of the Radboud University Medical Centre. All transcripts were carefully read by the researchers and inductively coded, using ATLAS.ti software (version 8.4.20). To secure data validity, all interviews were double-coded by at least two researchers and differences were discussed until agreement was reached. The codes were merged into overarching categories and themes.
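As referenced above, the sketch below makes the two cohort analyses concrete: (a) an incidence rate ratio with a log-scale Wald 95% confidence interval computed from event counts and patient-years, and (b) a mixed-effect logistic model with practice as a random intercept and group and step as fixed effects, here approximated with statsmodels' Bayesian mixed GLM (the study itself used SPSS). All counts, data, and column names below are illustrative placeholders.

```python
import math
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# (a) Incidence rate ratio with a log-scale Wald 95% CI
def irr_ci(events_a, py_a, events_b, py_b, z=1.96):
    irr = (events_a / py_a) / (events_b / py_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    return irr, irr * math.exp(-z * se_log), irr * math.exp(z * se_log)

irr, lo, hi = irr_ci(events_a=29, py_a=760, events_b=12, py_b=760)
print(f"IRR = {irr:.2f} (95% CI {lo:.2f}, {hi:.2f})")

# (b) Mixed-effect logistic model: group and step as fixed effects,
# practice as a random intercept, fitted by variational Bayes
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "mh_discussed": rng.integers(0, 2, n),  # 1 = mental health discussed
    "group": rng.integers(0, 2, n),         # 0 = control, 1 = intervention
    "step": rng.integers(1, 6, n),          # step of the wedge (1-5)
    "practice": rng.integers(1, 5, n),      # practice 1-4 (random factor)
})
model = BinomialBayesMixedGLM.from_formula(
    "mh_discussed ~ group + C(step)",       # fixed effects
    {"practice": "0 + C(practice)"},        # random intercept per practice
    df,
)
result = model.fit_vb()
print(result.summary())  # OR for the intervention = exp(coef of `group`)
```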
Cohort study

Characteristics of the study population

In total, 152 refugee minors from 72 families were enlisted in the four participating general practices, which together had 16,394 patients on their practice lists: GP1 (practice list 7108 patients): 65 refugees (28 families); GP2 (practice list 4265 patients): 49 refugees (25 families); GP3 (practice list 2849 patients): 34 refugees (17 families); and GP4 (practice list 2172 patients): 4 refugees (2 families). Of these refugee minors, 57% were male and 43% female. The refugees originated from 21 different countries; most parents came from Syria (52.0%) or Eritrea (8.6%). Nearly a quarter (22.7%) of the refugee minors were born after their parents arrived in the Netherlands. In the five years prior to the intervention (01-09-2014 to 01-09-2019), there were in total 1023 consultations with 152 refugee minors (1.4 per minor per year), compared to 1677 consultations in the control group (2.2 per minor per year). During the two-year intervention period, 24 refugee minors and 19 control minors moved, so their records could no longer be included in the evaluation; no new refugee minors were registered in the participating practices. After the implementation of the intervention, the number of consultations with refugee minors increased to a total of 604 consultations with 116 refugee minors (out of the 128 enlisted) (2.6 per minor per year), compared to 561 consultations in 117 controls (out of the 133 enlisted) (2.4 per minor per year).

Number of discussions about mental health and P-diagnoses before and after the intervention

In the five years before the start of the intervention, significantly fewer discussions of mental health were registered in the refugee group: 16 discussions per 1000 patient-years, compared to 38 discussions per 1000 patient-years in the control group (IRR 2.89 [95% CI 1.43, 6.21], p = 0.0046) (see Table ). In this period, a P-diagnosis (psychological distress or problem) was also registered significantly less often in the refugee children: 70 diagnoses/1000 patient-years compared to 128 diagnoses/1000 patient-years in the controls (IRR 1.83 [95% CI 1.30, 2.61], p = 0.0003) (see Table ). This confirmed our first hypothesis. In the two years after the start of the Empowerment programme, the rate of discussions about mental health within the refugee minor group increased from 16 to 47 discussions per 1000 patient-years (from 8% of all children in 5 years to 9% in 2 years) (OR = 1.21 [95% CI 0.52, 2.79], p = 0.66), although it also increased in the control group, from 38 to 71 discussions per 1000 patient-years (while in that group the percentage of children with whom mental health was discussed decreased from 19% to 14%) (OR = 0.71 [95% CI 0.38, 1.33], p = 0.28) (see Table ). During the intervention period, the number of refugee children who received a P (psychological) diagnosis also increased: 22 of the 128 refugees received in total 29 P-diagnoses (115 diagnoses/1000 patient-years), compared to a still higher number in the controls: 27 of the 133 controls received a total of 58 P-diagnoses (216 diagnoses/1000 patient-years) (IRR 1.88 [95% CI 1.18, 3.05], p = 0.0046) (see Table and Supplementary Tables S1 and S2 ). Thus, compared with the period before the intervention, the number of mental health discussions and of P-diagnoses increased in the refugee group, relatively more than in the control group.
This is in concordance with our second hypothesis.

Experiences of refugees

In order to develop an intervention tailored to the needs of refugees, we interviewed 21 refugee parents (see Table ) from eight different countries about their experiences with mental distress in their children and with healthcare in the Netherlands, in particular their experiences and wishes regarding their GP. These interviews showed that refugees and their children experience a lot of mental distress related to the traumas and difficulties they and their parents have experienced in their country of origin as well as now in the Netherlands. They initially seek support and help from family or religion or by engaging in distracting activities, and only as a last resort do they turn to their GP for help. They experience barriers in accessing the GP practice and in discussing psychological problems with their GP. The most important barriers are the lack of an interpreter, the business-like approach of the GP, and the limited time available. Refugee parents also experience shame in discussing psychological problems with outsiders.

'Look, I have not had an easy husband, he has been in prison and experienced war in our country of origin, his father and brother were murdered in front of his eyes. So my husband was confused, traumatized and unstable. And my children were very vulnerable to that. They could easily go the wrong way. As a mother you then have to be extremely strong.' (Afghan woman)

'We noticed here in children who are 12 or 13 years old that they are now stuck between the country of origin and the Netherlands. If they behave like the people in the country of origin, then that is not accepted by the Dutch people. If they act like the Dutch, they will not be accepted by their parents and the community from their country of origin. So they are stuck in between. They don't know what to do, they feel lost on the road.' (Eritrean man)

'If the general practitioner wants to improve something for foreigners, the first thing is the language. I have been here for more than four years, but I still do not dare to go to the general practitioner immediately, because I do not know how to explain my complaints in ten minutes, that is not enough time.' (Syrian woman)

'Our culture is closed, we do not want our problems to be known and talk about it. If an adult is stressed, he will not say so and eventually it will get worse. So the culture prevents it.' (Eritrean man)

Experiences of GP staff and other stakeholders

The interviews with six GPs and four MHNs from the participating practices (see Table ) before the start of the Empowerment programme showed that they were aware that mental distress and mental health problems are common in refugee children. However, they experienced barriers in discussing these problems. The most important barriers mentioned by the professionals were refugees' limited understanding and mistrust of the Dutch healthcare (system), language barriers, the limited time of professionals, expected cultural differences, and the fact that a physical complaint is often the reason for consultation although the origin might be mental distress.

'You just feel powerless. You feel there are a lot of issues in the life of this refugee patient, but it is difficult to ask about, given the language barrier, and then this cultural thing you do not know about.' (MHN1)
(MHN1)
‘Patients from other countries, I see, they are more often body oriented, do they have more physical complaints, and then you ask about some psychosocial topics and there are many problems, so you think ‘of course you are not sleeping well’. (GP3)
Our interviews with GPs and MHNs after the implementation of the programme showed that all three elements of the intervention were equally important. The training was a necessary start, as it raised awareness and provided the skills for culturally sensitive communication about mental distress. The booster sessions helped them to solve difficulties they encountered in providing care for refugees with mental distress. The practice guidance was experienced as equally important, as it was easy to consult (it took the form of a poster) and contained telephone numbers of interpreter services and of locally available support and mental healthcare services. It also contained detailed guidance for the introductory meeting that was advised to build trust and get to know the refugee family; due to time constraints, these introductory meetings were not held with all refugees, but when they were held, they were experienced as very effective. Both GPs and MHNs felt that this introductory meeting helped to develop mutual understanding and a relationship of trust. After the intervention, the GP practices started to use the telephone interpreter services more often, to their appreciation, but consultations remained complicated due to cultural and language barriers.
‘You could say I was more aware of it. After that training of yours I think oh yes good to think about it and think about it for a while. That’s the most important thing to me’. (GP4)
‘And I think there is a lot of added value in speaking to those people yourself and just getting to know your own patients much better. That you just build a bond a little faster and get to know your patients a little faster. That’s what got me the most’. (GP1)
‘The guidance we received for the extensive refugee history, was also very nice, as we use them as a stepping stone for an introductory meeting’. (GP1)
‘I have become more aware of refugees in our practice. To put yourself in their shoes and how they can experience things. That did help me’. (MHN)
‘Well, I actually think I got just a new perspective, a different view and more attention for refugees in practice. Yes, I think that’s the most important thing. Also tools and practical skills that you can use in practice’. (GP1)
The interprofessional collaboration groups were also experienced as helpful. The participants felt that their collaboration with other organizations and professionals had improved because they now knew each other and could easily find each other, which improved their communication and contributed to a shared approach with a clear division of tasks.
‘By knowing people, by having a network, you indeed have many possibilities…. apparently so many more people are actually reachable than what you think in advance if you don’t visit each other’. (Community worker)
‘Especially, being in a group with the GP for the first time, I found was very different’. (Social worker)
Main results
As we hypothesized, mental health was discussed significantly less often with refugee minors than with minors from the control group. After the implementation of our intervention to improve culturally sensitive person-centred care, the number of mental health discussions and of mental health diagnoses in refugee minors increased substantially. Interviews with refugees as well as GP staff before the start of the programme indicated that stress is a very common problem and that there are barriers to discussing it. The most important barriers mentioned by both parties were the lack of an interpreter and the limited time available. Refugee parents also mentioned the business-like approach of the GP and shame, whereas GP staff added refugees’ mistrust of the Dutch healthcare system and expected cultural differences as barriers. The Empowerment programme was positively assessed by all professionals involved. All three elements of the intervention in the GP practice were experienced as equally important: the training and booster sessions, the practice guidance, and the introductory meetings. Due to time constraints, the introductory meetings were not held with all refugees; however, when they were held, they were experienced as very effective.
Comparison with literature
In line with our findings, other studies also indicate that refugees and other migrants, as well as healthcare professionals, experience barriers in establishing the trust necessary to discuss sensitive topics like mental distress, due to language and cultural differences as well as time constraints (Fazel et al.; Suphanchaimat et al.; Loenen et al.; Zendedel et al.; Hodes & Vostanis; Iliadou et al.; Fair et al.; Jager et al.). In addition, refugees experience various barriers to accessing health care (Loenen et al.; Van der Boor & White; Hodes & Vostanis). On top of this, we know from other studies that when experiencing stress, they tend to seek out other forms of support first, before contacting professional help (Teunissen et al.; Renkens et al.). The physical presentation of stress-related complaints is often pointed out by GPs as a challenge in communication with immigrant groups (Hjörleifsson, Hammer, & Díaz). A language barrier is acknowledged as a major challenge, especially for psychosocial consultations (Oehri et al.). A professional interpreter must be involved in consultations with refugees, as this is known to be the only way possible mental health issues will be discussed (Krystallidou et al.; Zendedel et al.). Not only do the skills and attitudes of professionals have to be improved, but structural barriers also have to be removed, such as limitations put in place by the government, health insurers, or others regarding the availability of interpreter services and sufficient time for professionals (McFarlane, 2021). We did not find any studies in which mental health care for refugees in general practice was compared with this care for other groups. However, we know that GPs discuss other sensitive topics, such as sexual and reproductive care, less often with refugees and other migrants than with non-migrant patients (Raben & van den Muijsenbergh). A review showed that most interventions to improve primary care for refugees focus on upskilling doctors, with a paucity of research exploring the involvement of other healthcare members (Iqbal et al.). Our intervention involved other healthcare professionals as well as an interprofessional team, as was recommended in the review (Iqbal et al.).
The importance of collaboration with a local interprofessional team was also pointed out by other researchers, as the intense health needs of refugees require an integrated community-based primary healthcare approach (McMurray et al.). In other fields of primary care – midwifery care (Fair et al.) and dietetic care (Jager et al.) – training in culturally sensitive person-centred care, and in particular in cross-cultural communication, was also evaluated positively. However, to prove a positive effect of training in cultural competency on patient outcomes (e.g., the mental health of refugee minors), more systematic and large-scale development and evaluation of such training are required, including assessment of the real-life behaviour of professionals and the experiences of patients (Jager et al.). To our knowledge, the other elements of our intervention (the practice guidance as well as the introductory meeting with new refugee patients) have never been studied before. However, GP staff frequently mention difficulties in finding practical information on interpreter services or support organizations during their consultations (Papadakaki et al.; Teunissen et al.). Our guidance was designed to support professionals in this, and it was experienced as helpful. The introductory meetings were aimed at increasing trust, which other studies have also shown to be crucial for effective communication (Van den Muijsenbergh et al.). A person-centred, culturally sensitive approach in general has been shown to create more trust in the healthcare professional and is thus likely to improve the discussion of sensitive issues with migrants as well as the effectiveness of care (Betancourt; Renzaho et al.; Seeleman; Ahmed et al.; Ahmed et al.). In order to provide culturally sensitive person-centred care, GPs will need sufficient time, as pointed out by the WHO (WHO & Unicef). Therefore, we are pleased that Dutch health insurers will enable GPs to spend more time on their patients from 2024 (LHV, 2023).
Strengths and limitations
The stepped wedge design used here is complex, but it is a strong design because the participants serve as both control and intervention (Zhan). In a stepped wedge design, the sample size can be smaller than in a typical cluster trial. We had a sample size of N = 152, which is a strength for the validity of this study. A disadvantage of the stepped wedge design is its longer trial duration (3–5 years). The limited time (two years) we had for this study may therefore be a limitation for validity, especially as the implementation of the intervention programme was hampered by the COVID-19 pandemic and its restrictions on meetings such as training sessions. The COVID-19 pandemic also made the GPs busier but at the same time resulted in fewer consultations. Our finding that, despite this, the number of discussions on mental health increased in the refugee population is an indication that our intervention supported GPs and MHNs in improving care for this group. On the other hand, as mental distress increased during the COVID-19 pandemic, specifically in refugees (Padilla et al.), the increase in discussions on mental health after our intervention could also, at least partially, be caused by an increase in mental distress. Keeping patient records is time-consuming for GPs. During the data collection, we saw that patient records were not always complete.
This is also reflected in the variable ‘unknown’ in the results of, for example, the diagnoses. Mental health problems may thus have been discussed by the GP but remain invisible in the patient records.
Recommendations
For future research: In this study, we focused on GPs discussing mental health problems with refugee minors, as this is the starting point for treatment or guidance. Our intervention seems to increase the number of these discussions; however, the ultimate aim of our intervention is of course to improve the well-being of refugee youth and their families. Further research is needed to see whether or not this is the case.
For GP practices: Enable the provision of person-centred, culturally sensitive care by:
- Organizing access to (telephone) interpreter services
- Allowing time for introductory meetings, prolonged consultations, and interprofessional collaboration meetings, as well as post-graduate education on person-centred culturally sensitive care
- Providing easy-to-understand multilingual information on practice organization and on health promotion issues
- Involving migrants in the assessment and development of practice organization and procedures (O’Reilly-deBrún 2017; MacFarlane et al.)
For GPs and mental health practice nurses:
- Invest in trust by getting to know your patients through an extensive introductory meeting when they register with the practice. This meeting should address not only physical but also psychosocial aspects, language, and living circumstances. Besides, it should explain how the healthcare system works and how all staff are bound to acknowledge patient confidentiality.
- Involve professional interpreters in your consultations.
- Get to know and work together with other organizations and services that can support refugees (youth).
- Be aware of possible shame or stigma surrounding mental health issues, but do ask about mental distress by normalizing it, explaining how the body and mind react to stressors.
Person-centred, culturally sensitive care in general practice (including an introductory meeting with refugees), combined with interprofessional collaboration regarding the mental health of refugee minors, indeed results in more discussions of mental health problems with refugee minors in general practice. Such an approach was assessed positively by all involved and is therefore recommended for all general practices.
|
What does the general practitioner have at hand against COVID-19? | 4dfb5582-50c4-40fe-9fa2-68ff4c6dfa6e | 8990670 | Family Medicine[mh] |
The therapies with proven benefit that can be given to outpatients in the early phase of the disease comprise two classes. They are either neutralising monoclonal antibodies (mAbs) against the viral spike protein, which provide passive immunisation, or drugs with antiviral activity against SARS-CoV-2. The goal of both therapeutic approaches is to avert severe COVID-19 courses in at-risk patients through early use.
Neutralising mAbs
The recommendations for administering neutralising mAbs apply to all unvaccinated or incompletely vaccinated patients with risk factors and to all fully vaccinated patients with suspected insufficient vaccine response, in particular the severely immunosuppressed. The risk of infection with the Omicron variant must be taken into account when selecting an agent. Therapy should begin after PCR confirmation; in individual cases a rapid test is also sufficient.
Sotrovimab (Xevudy®): Approved for COVID-19 patients over 12 years of age and 40 kg, without O₂ supplementation and with risk factors for a severe COVID-19 course, within five days of symptom onset. In a randomised controlled trial, sotrovimab led to a 79% relative risk reduction for hospital admission or death. Efficacy against Omicron is retained, but according to the latest laboratory data it is probably insufficient against the BA.2 subtype.
Casirivimab/imdevimab (REGN-COV2): Approval as for sotrovimab, but with treatment initiation up to seven days after disease onset; additionally approved for post- and pre-exposure prophylaxis (PEP, PrEP). The therapy reduces viral load and the need for medical care, and PEP lowers the infection and disease rates after household contact. Caution: casirivimab/imdevimab is not effective against Omicron.
Tixagevimab/cilgavimab (Evusheld®): The combination, which is administered intramuscularly, has a special authorisation in the USA but no approval in the EU; its use for PrEP was recently approved in the EU. Efficacy against the Omicron subtype BA.1 is probably somewhat reduced.
Antiviral drugs
Nirmatrelvir/ritonavir (Paxlovid®): Since January of this year, the combination of a viral protease inhibitor and a booster has held conditional approval for adults who do not require supplemental oxygen and have an increased risk of a severe COVID-19 course. With treatment initiation within the first three days of illness, the rate of hospitalisation or death up to day 28 was reduced by a relative 89%, and by 88% when treatment started within the first five days. For COVRIIN, nirmatrelvir/ritonavir is a therapeutic alternative in the early phase if neutralising mAbs are not an option. According to COVRIIN, eligible patients include the unvaccinated and incompletely vaccinated with at least one risk factor, and patients with a high probability of vaccine failure. Caution is warranted because of possible interactions with co-medications.
Molnupiravir (Lagevrio®): The agent, which interferes with replication of the viral RNA, can currently only be given as an individual treatment attempt. It is an option for adults at risk of a severe course and without O₂ requirement, within five days of symptom onset.
In one study, the risk of hospitalisation or death was reduced by a relative 30%; molnupiravir is thus inferior to both the mAbs and nirmatrelvir/ritonavir. Efficacy against Omicron is likely.
Remdesivir (Veklury®): The antiviral is now also approved for outpatients with COVID-19 pneumonia (> 12 years), provided they have an increased risk of a severe course. With early administration, the risk of hospitalisation or death was reduced by 87%. COVRIIN rates the agent as a therapeutic alternative if neutralising mAbs or nirmatrelvir/ritonavir are not an option. Therapy should be started no later than seven days after symptom onset. Remdesivir is probably also effective against Omicron.
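Read as a decision hierarchy, the early-phase options above can be summarised in pseudologic. The sketch below is a hypothetical illustration of that hierarchy only; the function, its parameters, and the simplified eligibility rules are our own and must not be mistaken for clinical decision support.

```python
# Hypothetical sketch of the treatment hierarchy described above; eligibility
# is heavily simplified (e.g., mAb windows differ per agent: sotrovimab up to
# day 5, casirivimab/imdevimab up to day 7) and variant activity is ignored.

def early_phase_options(days_since_onset: int, needs_oxygen: bool,
                        high_risk: bool, mab_available: bool) -> list:
    options = []
    if needs_oxygen or not high_risk:
        return options  # early-phase mAbs/antivirals target high-risk outpatients
    if mab_available and days_since_onset <= 5:
        options.append("neutralising mAb (choice depends on circulating variant)")
    if days_since_onset <= 5:
        options.append("nirmatrelvir/ritonavir (mind co-medication interactions)")
        options.append("molnupiravir (individual treatment attempt only)")
    if days_since_onset <= 7:
        options.append("remdesivir (if mAbs and nirmatrelvir/ritonavir are no option)")
    return options
```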
A substance class that for a time received much attention as a possible COVID-19 therapy for outpatients appears in the second table: according to COVRIIN, the high-dose inhaled glucocorticoids budesonide and ciclesonide are among the agents without proven benefit. Positive effects were indeed observed in two open-label studies (STOIC, PRINCIPLE): a slight reduction in emergency presentations or hospital admissions, and a slightly reduced time to recovery. In the only blinded, placebo-controlled study, however, the primary endpoint (resolution of COVID-19 symptoms) was missed. The COVRIIN experts therefore consider the evidence insufficient for a recommendation. This assessment is not shared by the DEGAM. Its COVID-19 guideline for general practitioners states: "Patients with SARS-CoV-2 infection and a risk of a severe course can be offered budesonide inhalation (2 × 800 μg/d for 7–14 days) to lower this risk (off-label)." However, the recommendation is not supported by the other societies involved in the guideline.
Because of the increased risk of venous thromboembolism (VTE), prophylactic anticoagulation with low-molecular-weight heparin is recommended for hospitalised COVID-19 patients throughout the entire hospital stay. In non-ICU patients with an elevated VTE risk (for example due to severe obesity or a history of VTE), it is even advised to consider therapeutic anticoagulation early. After discharge from hospital, anticoagulation should be continued only in individual cases (e.g., persistent immobility plus low bleeding risk). For outpatients, by contrast, there is no recommendation for routine anticoagulation. In the second phase of COVID-19, the disease process is driven mainly by hyperinflammation. Three substance classes with proven benefit are available today for its treatment: according to COVRIIN, dexamethasone is indicated for any form of newly developed or worsening oxygen requirement, with approval for patients aged 12 and over. If pneumonia and hypoxaemia nevertheless progress rapidly, adding an interleukin-6 receptor antagonist can improve the prognosis. Tocilizumab is approved for this indication; treatment with sarilumab is off-label. Treatment with the Janus kinase inhibitors baricitinib and tofacitinib is likewise off-label; it is intended primarily for early use in patients with low- or high-flow O₂ requirements, in combination with dexamethasone. Source: Drug therapy for COVID-19 with assessment by the COVRIIN expert group at the Robert Koch Institute (as of 7 March 2022)
|
The tumour histopathology “glossary” for AI developers | 7c24a6f1-d231-4ab0-840e-919dc1f4d0ab | 11756763 | Anatomy[mh] |
Histopathology—the microscopic analysis of tissue samples to diagnose and study diseases—is the mainstay of cancer diagnosis, and most clinical practice revolves around expert human pathologists examining very thin tissue slices (“sections”) mounted on glass slides under traditional light microscopes. The last decades have seen a huge rise in the application of bioinformatics, artificial intelligence (AI), and machine learning (ML) in cancer research. A plethora of AI tools have been developed specifically for histopathology, which aim to improve its diagnostic reliability and accuracy. Tissue, i.e., the sum and interplay of cells, and cellular morphologies are the key variables in histopathology and are the foundation for any supervised modelling attempt. Thus, we believe that successful AI development requires an understanding of tumour histology. Building accurate AI models for medicine is an interdisciplinary task and requires a complement of different expertise, i.e., computational and mathematical skills combined with clinical knowledge. This is often hampered by computer scientists and biologists/expert healthcare professionals not physically working in the same environment, where they cannot easily access each other’s expertise. The lack of interdisciplinary communication results in significant inefficiencies and misunderstandings, leading many AI models to remain prototypes rather than being integrated into clinical practice. In light of the rise of AI in the clinic, teaching medics AI basics has become essential; here, we propose that a similarly reciprocal understanding of biology is important for AI developers to build better models. Recent advancements in AI research have significantly automated a range of diagnostic (defining disease), predictive (predicting the response to a certain treatment), and prognostic (stratifying patients at risk and determining outcome/patient prognosis) tasks in oncology. However, these models often operate as “black boxes,” providing little transparency regarding their decision-making processes. Physicians, on the other hand, base their diagnoses on well-defined biological features and established criteria, especially in the case of histopathologists, whose cornerstone is tissue and cell morphology. Therefore, the interpretability of both the design and the results of AI-based methods is crucial for gaining the trust and acceptance of the medical community and accelerating the clinical implementation of in-silico approaches. A good level of understanding of tumour biology instructs AI developers to incorporate relevant biological features into their computational models. This understanding not only improves the accuracy and relevance of the models but also facilitates the presentation of results in a manner that physicians can validate and understand. For example, a study on early-stage oestrogen receptor-positive (ER+) breast cancer demonstrated how considering relevant biological features is crucial in the development of AI methods for survival prediction. The method described in this work leveraged an understanding of nuclear pleomorphism (variance in the appearance of the cell nucleus), which is a crucial factor in breast cancer grading.
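To make concrete how such a morphology feature can be turned into a model input, the helper below proxies nuclear pleomorphism by the spread of nuclear sizes in a labelled segmentation mask. This is a minimal sketch of ours, not the method of the cited study; real grading systems use far richer criteria.

```python
import numpy as np
from skimage.measure import regionprops

def nuclear_pleomorphism_score(nucleus_labels: np.ndarray) -> float:
    """Coefficient of variation of nuclear areas in a labelled mask.

    Higher values mean more size variation between nuclei, a crude proxy
    for pleomorphism; an illustrative feature, not a grading system.
    """
    areas = np.array([r.area for r in regionprops(nucleus_labels)], dtype=float)
    if areas.size < 2 or areas.mean() == 0:
        return 0.0
    return float(areas.std() / areas.mean())
```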
This article aims to bridge the gap between AI development and translation to routine clinical application by emphasising the importance of relevant biological knowledge, which is helpful in enhancing model interpretability and subsequent clinical validation. Only a solid biological understanding enables modellers to define sets of relevant features and implement their morphological properties into algorithms. While recently published literature aims at bestowing (cancer) healthcare professionals with expertise on AI, e.g., developing image analysis and modelling skills for clinicians, the opposite, giving AI developers an understanding of cancer histopathology fundamentals, is rare. Thus, in this work we introduce some of the essential concepts of tumour histopathology: the most frequent cell types, the concepts of “neoplasia” and “tumours,” and the tumour microenvironment (TME). We also illustrate routine histopathology protocols and special stainings that provide the visual representation of the above concepts. In order to model disease, firstly, a solid understanding of cell types, their physiological function, overall architecture, and interplay with other cells is necessary. Parameters for image analysis and neural network training are best derived by applying knowledge of their defining morphology and distinct, if not unique, individual features (the common proverb of “the eyes can’t see what the mind doesn’t know” applies). In the first section, we introduce different cell types and the concept of “neoplasia.”
Morphological diversities in cell types and tissues
Most human cells consist of nucleus and cytoplasm, both of which are organised into different compartments and organelles and surrounded by the cell membrane. The size of the nucleus exhibits a considerable amount of variability. Normal cells mostly display smooth nuclear contours and smaller size. Cancerous cells, on the other hand, tend to exhibit larger, pleomorphic (i.e., bizarre-looking) nuclei with a prominent nucleolus (the spherical site of ribosome biogenesis). Further, there is variation in nuclear shape among different cell types. Fibroblasts, which are key components of connective tissue, have a spindle shape, whereas epithelial cells tend to be more round or oval. Second, the cytoplasm varies significantly in size and composition. Eosinophilic granulocytes typically feature a bilobed (two-lobed, spectacle-shaped) nucleus, while macrophages can be recognised by their large cytoplasm. Overall, standard cellular morphology reveals distinct sets of features to build AI models upon for cell phenotyping. Usually, these AI models consist of two sub-models, one for cell segmentation and the other for cell classification using the segmentation results; a minimal sketch of this two-stage design is given below. Recently, substantial progress has been made to develop unified models for cell segmentation and classification simultaneously. However, cell phenotyping still faces significant challenges, which include but are not limited to the scarcity of annotated large-scale data sets, the significant morphological heterogeneity within cell types, and the complex spatial relationships between cell types and their microenvironment.
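The snippet below sketches the two-stage design just described; the `segmenter` and `classifier` callables are placeholders for whatever models a developer actually trains, not any specific published architecture.

```python
import numpy as np

def phenotype_cells(image: np.ndarray, segmenter, classifier) -> list:
    """Two-stage cell phenotyping: segment instances, then classify each.

    `segmenter` returns an integer instance mask (0 = background);
    `classifier` assigns a phenotype string to one cell given its mask.
    """
    labels = segmenter(image)
    phenotyped = []
    for cell_id in np.unique(labels):
        if cell_id == 0:
            continue  # skip background
        mask = labels == cell_id
        phenotyped.append({"id": int(cell_id),
                           "phenotype": classifier(image, mask)})
    return phenotyped
```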
Human tissues, i.e., functional units of synergistically working cells, are composed of collections of cells that in a non-diseased state have an ordered arrangement in space. Roughly, tissue can be subsumed into two major compartments: parenchyma, i.e., the functional part composed of specialised cells, and stroma, i.e., the supporting part, mainly connective tissue, extracellular matrix, and (micro)vessels. While stroma is morphologically similar across tissue types, the architecture of the parenchyma can differ drastically. As an example, breast parenchyma consists of lobules and ducts for lactation, whereas parenchyma in the heart is mainly cardiac muscle. In short, tissue function defines the composition of the parenchyma and vice versa.
What are “tumours”?
“Tumour” (Latin for “swelling”) is an ill-defined term, in principle designating an increase in tissue volume. It refers to a neoplastic process (“neoplasia” being the abnormal and excessive growth of cells and tissue), whose biological “potential” is in most cases dichotomously classified as either benign (localised, without metastatic potential, e.g., a minute hyperplastic polyp in the colon) or malignant (invading neighbouring tissue and/or moving to distant organs, e.g., colorectal cancer). The cells of origin for a neoplasia can be classed as epithelial, lymphoid (blood cells), or mesenchymal (connective tissue). Understandably, a large number of AI models focus on the most frequent cancer types, which are epithelial in origin and solid, i.e., mass-forming. Non-solid neoplasia are, for instance, cancers of the blood system, which are not localised and not confined to a single organ, e.g., leukemia, where neoplastic cells are in circulation in the blood. In the following, we concentrate on solid neoplasia, due to its epidemiologic relevance and localised anatomy. Histologically, solid neoplasia is composed of the tumour parenchyma, i.e., the neoplastic cells themselves, and the tumour stroma. The tumour stroma has gained more and more attention in cancer research and is mainly composed of cells of the tumour microenvironment (TME, see below), extracellular matrix (structural and specialised proteins surrounding units of cancer cells), and connective tissue (mainly collagen fibres, fibroblasts, and microvessels).
What is “tumour invasion” and its predecessors?
Tumour invasion is the critical step to a malignant phenotype in epithelial neoplasia (crudely referred to as “cancer”). Illustrative examples are the malignant transformation of (colorectal) adenomas to carcinomas, loss of basal cells in prostate cancer, loss of myoepithelial cells in breast cancer, or crossing anatomical barriers, e.g., the basal layer in skin squamous carcinoma. For those interested in modelling diagnostic AI support, knowledge of these anatomical structures is critical. The features that are truly unique to invasive cancers should be exploited—for example, the presence of features such as necrosis (dead cells) or an abundance of mitoses (dividing cells) does not imply malignancy, as they can be frequent in benign neoplasms. Invasion is usually the final step in a sequence of malignant transformation. In epithelial tumours (e.g., gastric adenocarcinoma), it is frequently preceded first by metaplasia and second by dysplasia. Epithelial dysplasia, i.e., simply put, the presence of abnormal but not yet cancerous cells, is a frequent precursor lesion, e.g., in the upper and lower gastrointestinal tract, genital tract, skin, and the head and neck region. However, the defining morphological criteria of dysplasia differ by tissue type, and modelling approaches need to take this into account. For instance, while hyperchromasia (darker staining) is a feature of dysplastic nuclei in the colon, this does not hold true for squamous dysplasia, where architectural disorders and mitoses are more diagnostic. The modeller needs solid knowledge of which morphological criteria are defining in a respective organ, as concepts are not necessarily transferable.
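One way a modeller might encode this organ dependence is a simple lookup of organ-specific defining criteria, as sketched below. The entries merely paraphrase the two examples given in the text; the data structure and function are our own illustration, not a validated diagnostic catalogue.

```python
DYSPLASIA_FEATURES = {
    # Illustrative entries only, paraphrasing the examples above.
    "colon": ["nuclear hyperchromasia"],
    "squamous_epithelium": ["architectural disorder", "mitoses"],
}

def dysplasia_features(organ: str) -> list:
    """Return the organ-specific defining criteria a model should encode."""
    if organ not in DYSPLASIA_FEATURES:
        raise KeyError(f"no feature set defined for organ: {organ!r}")
    return DYSPLASIA_FEATURES[organ]
```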
The “face” of malignancy—Morphology and pitfalls
Not every malignancy follows the conventional benign-intermediate-malignant trajectory described above, examples being de novo (i.e., not arising from a precursor tumour, such as a skin mole (“naevus”)) malignant melanoma or sarcoma (relatively rare malignant tumours of the soft tissue or bone). Briefly, malignant tumours can be classified according to their cell(s) of origin, e.g., epithelial, mesenchymal (simply put, connective tissue), or lymphoid (see above). It is reasonable to assume that, for developers and the healthcare system, epidemiologically frequent malignant tumours are most relevant, these being malignant epithelial tumours, namely (adeno)carcinomas, e.g., of the prostate, breast, colon (rectum), and lung. Adenocarcinomas are a morphologically (and also molecularly) distinct subset of carcinomas (e.g., in contrast to squamous cell carcinomas) and have a typical morphology with gland-like (i.e., circularly arranged with a central lumen) growth and malignant nuclear features such as bizarre cell forms, mitotic figures, and a prominent nucleolus; however, there are always exceptions, and this can make modelling tricky and algorithms incompatible with routine diagnostic practice. Illustrative exceptions are deviant subtypes or atypical cancer growth patterns, such as high-grade prostatic adenocarcinoma with diffuse growth, or invasive-lobular breast cancer with discohesive tumour cells. Not incorporating these features into algorithms is severely limiting and might jeopardise clinical conclusions from a computer-assisted diagnostic setup. Non-small cell lung cancer (NSCLC), the second most frequent cancer in both sexes in the United States, has a known multitude of growth patterns, and colorectal cancer, too, can exhibit very deviant morphology, even if rarely. In addition, adenocarcinoma is frequently accompanied by a strong stromal response (“desmoplasia”), which can be seen as collagen deposition and extracellular matrix (ECM) recomposition. Squamous cell carcinoma, most prominent in the head and neck region, genital tract (cervical cancer, anal cancer), and lung, is characterised by keratin “pearls” (whorl-shaped accumulations of keratin, a structural protein) and intercellular bridges (specialised connections between adjacent cells). It has to be kept in mind that even very typical diagnostic features might not be apparent in poorly differentiated tumours, which have lost much of their resemblance to the tissue of origin (“dedifferentiation” being the process of losing tissue specialisation, returning to a less specialised state). Mixed carcinomas, e.g., adenosquamous or adeno-neuroendocrine (e.g., mixed neuroendocrine-nonneuroendocrine neoplasms), add further to the complexity. Lastly, malignancies of other, e.g., non-epithelial, lineages show a considerable amount of variation, too. In particular, malignant melanoma is known for its plethora of “morphological faces.”
Composition of, and modelling, the tumour microenvironment (TME)
The TME is the complex biological ecosystem surrounding a tumour.
It is composed of tumour-infiltrating lymphocytes (“TILs”), (cancer-associated) fibroblasts and (tumour-associated) neutrophils (CAFs, TANs), macrophages, extracellular matrix, and supportive elements such as microvessels. All of those have gained significant attention due to their prognostic, tumour-promoting or -suppressive impact. While macrophages have traditionally been dichotomously subclassified (i.e., M1- and M2-polarised), TILs can be more deeply sub-stratified. Modelling the TME needs a comprehensive strategy due to its inherent level of complexity and set of “players.” Nevertheless, modelling using morphology is possible to a certain degree, as the “players” generally have distinct cellular and architectural features.

Most human cells consist of nucleus and cytoplasm, both of which are organised into different compartments and organelles and surrounded by the cell membrane. The size of the nucleus exhibits a considerable amount of variability. Normal cells mostly display smooth nuclear contours and smaller size. Cancerous cells, on the other hand, tend to exhibit larger and pleomorphic (i.e., bizarre looking) nuclei with a prominent nucleolus (the spherical site for ribosome biogenesis). Further, there is variation in nuclear shape among different cell types. Fibroblasts, which are key components of connective tissue, have a spindle shape, whereas epithelial cells tend to be more round or oval. Second, the cytoplasm varies significantly in size and composition. Eosinophilic granulocytes typically feature a bilobed (two-lobed, spectacle-shaped) nucleus, while macrophages can be recognised by their large cytoplasm. Overall, standard cellular morphology reveals distinct sets of features to build AI models upon for cell phenotyping. Usually, these AI models consist of 2 sub-models, one for cell segmentation and the other for cell classification using the segmentation results. Recently, substantial progress has been made to develop unified models for cell segmentation and classification simultaneously. However, cell phenotyping still faces significant challenges, which include but are not limited to the scarcity of annotated large-scale data sets, the significant morphological heterogeneity within cell types, and the complex spatial relationships between cell types and their microenvironment.
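To make the two-stage design concrete, the sketch below segments nuclei in an H&E tile and assigns a crude morphology-based class. It is a minimal illustration only: the Otsu threshold, size cutoffs, and eccentricity rules are invented stand-ins for the trained segmentation and classification sub-models used in practice.

```python
# Illustrative two-stage cell-phenotyping sketch (segmentation -> classification).
# Thresholds and class rules are invented for demonstration; real pipelines use
# trained deep-learning models rather than these handcrafted heuristics.
import numpy as np
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import remove_small_objects


def phenotype_cells(rgb_tile: np.ndarray) -> list[dict]:
    """Segment nuclei in an RGB H&E tile and assign a crude morphology class."""
    # Stage 1: segmentation -- isolate the hematoxylin (nuclear) stain channel.
    hematoxylin = rgb2hed(rgb_tile)[..., 0]
    mask = hematoxylin > threshold_otsu(hematoxylin)
    mask = remove_small_objects(mask, min_size=30)  # drop small debris
    nuclei = label(mask)

    # Stage 2: classification -- simple shape/size rules standing in for a
    # trained classifier operating on the segmented instances.
    cells = []
    for region in regionprops(nuclei):
        if region.area < 120 and region.eccentricity < 0.7:
            cell_type = "lymphocyte-like"          # small, round nucleus
        elif region.eccentricity > 0.9:
            cell_type = "fibroblast-like"          # spindle-shaped nucleus
        else:
            cell_type = "epithelial/tumour-like"   # larger, oval-to-pleomorphic
        cells.append({"centroid": region.centroid, "type": cell_type})
    return cells
```

In production pipelines, stage 2 would be replaced by a classifier trained on annotated nuclei, precisely because handcrafted rules cannot capture the morphological heterogeneity noted above.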
In the following, we introduce routine stainings and immunohistochemistry that can facilitate the morphological characterisation of tumours and their TME.

Formalin-fixed paraffin-embedded (FFPE) and fresh frozen (FF) tissue

Formalin-fixed, paraffin-embedded (FFPE) tissue preservation is the gold standard in histopathology for maintaining tissue integrity. This technique, first introduced by German pathologist Friedrich Blum in 1896, involves fixing tissue samples in formalin, which preserves their cellular structure by cross-linking proteins. The fixed tissues are then dehydrated, embedded in paraffin wax, and formed into a solid, archivable tissue block. These blocks can be sectioned into thin slices (usually between 2 and 10 μm), mounted onto glass slides, and stained, allowing for microscopic examination. Fresh frozen (FF) tissue, on the contrary, is immediately preserved by snap-freezing at −196°C in liquid nitrogen (e.g., cancer tissue within 1 h from surgery), without (formalin) fixation. As FF is tissue in its purest form, it is more accurate for genomic analysis than FFPE. However, FFPE preserves structural integrity better and is much more standardised (and affordable) for conventional staining and immunohistochemistry (see below). Despite these advantages, it is crucial to consider that the number of (viable) cells in tissue samples is highly heterogeneous and depends not only on the tissue (and biopsy site) itself but also on how it is retrieved. In general, surgical resection specimens collected in conventional FFPE tissue cassettes (approximately 1 × 1 × 0.5 cm) tend to contain the most cells, while the amount of cells in tissue biopsies is much reduced and limited by several factors such as, for instance, the gauge of the biopsy needle, the anatomical site (e.g., soft tissue is less cellular than bone marrow), and the expertise of the operator.

The hematoxylin and eosin (HE) stain

The hematoxylin and eosin (HE) stain is the standard staining that has been used in (diagnostic) histopathology for many years. While hematoxylin stains acidic (basophilic) structures, e.g., the nucleus, in different degrees of blue-purple, eosin stains eosinophilic structures in red-pink, such as the cytoplasm and ECM. This allows for the identification of common cell types and their arrangement in space. The HE stain is cheap, widely used and well accepted in the diagnostic community. A low amount of staining variability is critical for both diagnostics and AI algorithms.

Immunohistochemistry (IHC) and immunofluorescence (IF)

HE stains allow for a vast amount of tissue interpretability, but in order to address cells and their interplay more granularly, auxiliary information can be obtained from immunohistochemistry (IHC). (Single-plex) IHC has revolutionised histopathology in the 20th century and continues to be an indispensable tool. In principle, IHC detects a target antigen of interest (e.g., membrane transporters, enzymes) by using a chromogen-linked commercial antibody that binds to the antigen of interest and “stains” it a particular colour—usually brown ( and ). The target of interest could be in tumour cells or in cells of the TME. While IHC uses enzymes as chromogens, immunofluorescence (IF) uses fluorescent dyes (fluorophores) conjugated to antibodies. Advantages of IF are higher resolution and an improved visualisation of co-localised antigens. On the contrary, IHC stainings are long-lasting, cheaper and can be viewed by light microscopy.
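Because hematoxylin, eosin, and the brown DAB chromogen absorb light differently, the stains can be computationally separated, which is often the first step when quantifying an IHC marker. Below is a minimal sketch using the Ruifrok-Johnston colour deconvolution shipped with scikit-image; the file name and the positivity threshold are placeholders, not values from any validated assay.

```python
# Minimal stain-separation sketch using the Ruifrok-Johnston colour
# deconvolution implemented in scikit-image. The file name is a placeholder.
import numpy as np
from skimage.color import rgb2hed
from skimage.io import imread

ihc_rgb = imread("ihc_tile.png")   # hypothetical DAB-stained IHC tile (RGB)
hed = rgb2hed(ihc_rgb)             # -> Hematoxylin, Eosin, DAB channels

hematoxylin = hed[..., 0]          # nuclear counterstain
dab = hed[..., 2]                  # brown chromogen marking the antigen

# A crude positivity readout: fraction of pixels with strong DAB signal.
positive_fraction = float(np.mean(dab > 0.05))  # threshold is illustrative
print(f"DAB-positive area fraction: {positive_fraction:.2%}")
```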
Some antigens are of particular relevance for diagnostic and (consequently also) AI-developing purposes, namely the proliferation marker Ki-67 or lineage markers such as cytokeratins. “Clusters of differentiation” (CDs) are surface proteins that can help with subtyping cells, particularly immune cells (refer to https://ftp.uniprot.org/pub/databases/uniprot/knowledgebase/complete/docs/cdlist.txt ). This is useful as there is little potential to identify an immune cell subpopulation from an HE stain alone. IHC staining should be validated extensively, as some antibodies tend to cross-react among different targets, leading to a lack of specificity and misleading results. Internal on-slide controls, such as cross-reacting (stromal) cells, can be helpful as a quality control. Tissue microarrays (TMAs), i.e., assembling a multitude of usually 0.6 to 1 mm sized tissue cores into a single slide, have allowed for a high-throughput setup. Depending on the tissue type and anatomic site, a TMA core usually captures between a few hundred and a few thousand cells per tissue core. TMAs allow multiple stainings, and tissues (from different patients) can be analysed under standardised conditions.

The advent of multiplexing

Recently, antigen visualisation reached a new era in which we can detect up to hundreds of markers of interest in one section of tissue (recent comprehensive review in ). Basic panels visualise cells of interest and key anatomic structures, for instance, an epithelial marker, a pan-leukocyte marker and vessels (e.g., a cytokeratin, CD45, CD31). Multiplexing allows for multiple panels which represent different compartments ( and and ), characterising the TME and its neighbourhoods. This increasing plexity allows for the interrogation of biology in more detail, but results in more complex data sets than HE. This could present significant challenges for AI models, which include the demand for larger training data sets (and thus higher computational power) to reach a similar level of performance seen for HE, a higher likelihood of technical artefacts due to more complex wet-lab protocols, and the difficulty of jointly modelling multiple cellular markers and their spatial relationships. Further, different markers often co-localise, which introduces additional difficulties to the modelling process. Thus, compared to HE, analysing both IHC and IF data is usually harder and more expensive, a challenge further exacerbated by increasing plexity.
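A common first analysis step on such panels is simple marker gating: each segmented cell receives a phenotype from thresholded marker intensities. The toy sketch below uses the basic epithelial/pan-leukocyte/vessel panel mentioned above; the table, cutoff, and gating order are invented for illustration, and real pipelines must first normalise intensities and resolve the co-localisation issues just described.

```python
# Toy marker-gating sketch for a basic multiplex panel (cytokeratin, CD45, CD31).
# The input table and the cutoff are invented for illustration.
import pandas as pd

cells = pd.DataFrame({
    "cell_id":     [1, 2, 3, 4],
    "cytokeratin": [8.2, 0.3, 0.1, 0.2],  # arbitrary normalised intensities
    "CD45":        [0.1, 6.5, 0.2, 0.4],
    "CD31":        [0.2, 0.1, 5.9, 0.3],
})

CUTOFF = 1.0  # illustrative positivity threshold per marker


def gate(row: pd.Series) -> str:
    # Hierarchical gating: epithelial marker first, then leukocyte, then vessel.
    if row["cytokeratin"] > CUTOFF:
        return "epithelial/tumour"
    if row["CD45"] > CUTOFF:
        return "leukocyte"
    if row["CD31"] > CUTOFF:
        return "endothelial"
    return "other/stromal"


cells["phenotype"] = cells.apply(gate, axis=1)
print(cells[["cell_id", "phenotype"]])
```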
To be implemented into routine practice, an AI algorithm needs several indispensable properties, i.e., clinical relevance, high accuracy, rapid implementation, fast computation, and last but not least, user-friendliness. Perfect accuracy is desired for pathological diagnostics, such as differentiating between tumour invasion and benign disease; anything less could put patients’ lives at risk. False-negative calls for a prognostic biomarker may lead to reduced therapeutic options. A variety of technical difficulties, such as staining differences, scanner variability, image modalities, and image size, hinder the performance of AI models, including their generalisation to different datasets. For these models to achieve robust generalisation across different datasets, several key standardisation approaches are required throughout the imaging and analysis pipeline, such as introducing standardisation of tissue processing, sectioning thickness, reagents, fixation protocols, and scanner calibration, and performing stain normalisation. Further, multi-centre validation data sets that are able to represent real-world technical variations could also help in developing and validating more generalised AI models. Aside from the need for standardisation, the AI developer is frequently confronted with the profound problem of lacking in-depth biological, morphological, and structural knowledge. It is our hope that this work enables the developer to leverage biologically relevant features into designing computational models. Further, this becomes helpful in explaining the output from “black-box” AI models, by correlating the results with known biological features. With a common knowledge level, the design of “pathologist in-the-loop” approaches in training AI models is facilitated.
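Stain normalisation, mentioned above as one of the key standardisation steps, can be sketched compactly. The snippet below implements a Reinhard-style normalisation that matches LAB-space colour statistics of a source tile to a reference tile; it is a simplified stand-in for dedicated methods such as Macenko or Vahadane, and the file names are placeholders.

```python
# Reinhard-style stain normalisation sketch: match the LAB-space mean and
# standard deviation of a source tile to a reference tile.
import numpy as np
from skimage.color import lab2rgb, rgb2lab
from skimage.io import imread


def reinhard_normalise(source_rgb: np.ndarray, reference_rgb: np.ndarray) -> np.ndarray:
    src, ref = rgb2lab(source_rgb), rgb2lab(reference_rgb)
    out = np.empty_like(src)
    for c in range(3):  # match channel-wise colour statistics
        mu_s, sd_s = src[..., c].mean(), src[..., c].std() + 1e-8
        mu_r, sd_r = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - mu_s) / sd_s * sd_r + mu_r
    return np.clip(lab2rgb(out), 0, 1)


# File names are hypothetical placeholders for tiles from two scanners/sites.
normalised = reinhard_normalise(imread("site_A_tile.png"), imread("reference_tile.png"))
```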
A new marker for predicting sentinel lymph node metastasis in early (cT1-2N0) breast cancer: Tumor-infiltrating lymphocytes (TILs)

Breast cancer is one of the most common malignant tumors among women worldwide, its incidence rate ranks first, and patient prognosis is closely related to tumor staging. Therefore, early diagnosis and treatment of breast cancer are crucial. In the era of precision medicine, the treatment of breast cancer is shifting towards a more precise, minimally invasive, and personalized approach. Sentinel lymph node biopsy (SLNB), as a method to predict the status of axillary lymph nodes, has been used in breast cancer since 1994. The traditional view is that patients with positive sentinel lymph nodes require Axillary Lymph Node Dissection (ALND); however, ALND may lead to lymphedema and functional impairment of the ipsilateral upper limb, which affects the quality of life of patients. In recent years, clinical trials such as Z0011, AMAROS, and the most recent SENOMAC have shown that breast cancer patients with micrometastasis or 1-2 positive SLNs in cT1-3cN0 can forego ALND. In addition, some studies have suggested that sentinel lymph node biopsy itself may be an over-treatment for patients with clinically negative axillary lymph nodes. A recent SOUND randomized clinical trial showed that cT1 breast cancer patients with negative axillary lymph node ultrasound findings can safely avoid any axillary surgery. Therefore, it is of great significance to find early indicators for predicting SLNM in patients with clinically negative axillary lymph nodes.

Immune cells in the tumor microenvironment generally include T lymphocytes, B lymphocytes, natural killer cells, tumor-associated macrophages (TAMs), dendritic cells, and myeloid-derived suppressor cells. TILs are a group of lymphocytes present within the tumor nest and in its stroma, which can directly reflect the state of the tumor immune microenvironment and play an important role in the occurrence, progression, and control of tumors. It is worth noting that there have been no reports on the relationship between SLNM and TILs in early-stage (cT1-2N0) breast cancer. The application of TILs in early breast cancer provides a new perspective for the formulation of individualized treatment strategies. By detecting the infiltration level of TILs, it is possible to more accurately predict the metastasis of sentinel lymph nodes, thereby enabling the formulation of more personalized treatment plans, avoiding unnecessary ALND, and reducing complications for patients. Investigating TILs as a predictor of sentinel lymph node metastasis in early breast cancer is therefore both novel and clinically significant: the detection and analysis of TILs can provide new evidence for individualized treatment strategies, optimize clinical decision-making, improve treatment outcomes and survival, and reduce complications and medical costs. This study aims to explore the value of TILs in predicting SLNM in early-stage (cT1-2N0) breast cancer patients and to provide a new method for preoperative assessment of SLNM status.
2.1. Patients

For research purposes, we began collecting patient information on May 1, 2024, and the authors had access to information that could identify individual participants during or after data collection. We collected patients who were preoperatively diagnosed with early-stage (cT1-2N0) breast cancer and underwent surgery as first-line treatment at our hospital from January 2022 to December 2023. Inclusion criteria were: (1) female patients with invasive breast cancer confirmed by core needle biopsy pathology before surgery; (2) clinical staging of cT1-T2N0; (3) no anti-cancer treatment of any form before surgery, including neoadjuvant chemotherapy, neoadjuvant endocrine therapy, neoadjuvant targeted therapy, and neoadjuvant radiotherapy; (4) complete clinical and pathological data; (5) HE-stained sections from the core needle biopsy of the breast tumor stored in the pathology department of our hospital and available for TILs counting. Exclusion criteria were: (1) core needle biopsy of the breast tumor showing only in situ carcinoma components, with or without microinvasion; (2) stage IV metastatic breast cancer; (3) bilateral breast cancer; (4) other malignant tumors confirmed at the time of breast cancer diagnosis. Based on the inclusion and exclusion criteria, a total of 337 patients were ultimately included.

2.2. Basic data

The clinical data of the patients were collected using our hospital’s electronic medical record system, including age, menstrual status, tumor location, and cT stage, among others. IHC was used to detect the expression of ER, PR, HER2, and Ki67 in the biopsy tissue. According to the 2010 ASCO/CAP guidelines, staining of ER and PR in more than 1% of cells was considered positive. HER2 expression was measured according to the 2013 ASCO/CAP guidelines, where HER2 positivity was defined as +++ on IHC or gene amplification on fluorescence in situ hybridization (FISH) for IHC ++. Based on previous studies, Ki67 > 14% was considered a high proliferation index, and Ki67 ≤ 14% a low proliferation index ( ). All cases were classified into Luminal A (ER+, PR+/-, HER2-, low Ki67), Luminal B (ER+, PR+/-, HER2+ or ER+, PR+/-, HER2-, high Ki67), HER2 enriched (ER-, PR-, HER2+), and TNBC (ER-, PR-, HER2-) based on the results of IHC staining for ER, PR, HER2, and Ki67. The sentinel lymph node (SLN) was marked with methylene blue dye and removed during surgery. The SLN was sliced into 2 mm thick sections, and pathological examination was performed to check for metastasis. SLNM was classified according to previous studies: tumor deposits > 2 mm were considered macrometastases, and deposits > 0.2 mm and ≤ 2 mm, or clusters of more than 200 tumor cells, were considered micrometastases. Deposits < 0.2 mm or clusters of fewer than 200 tumor cells were considered isolated tumor cells.

2.3. The histopathological evaluation of TILs

The histopathological assessment of TILs density in core needle biopsy specimens allows for the categorization of TILs into intratumoral tumor-infiltrating lymphocytes (iTILs) and stromal tumor-infiltrating lymphocytes (sTILs) based on their spatial distribution. iTILs are lymphocytes that are in direct contact with tumor cells and are located within the tumor nest, while sTILs are found within the fibrous stroma of the tumor. Both types have clinical significance.
Currently, sTILs are considered to have a higher clinical application value due to their relatively higher quantity, ease of observation and assessment, and higher reproducibility, which is why the International TILs Working Group recommends assessing sTILs. In this study, only sTILs were evaluated, and unless otherwise specified, TILs refer to sTILs in the following text. The Breast Cancer TILs scoring guidelines published by the International Breast Cancer TILs Research Group in 2014, along with subsequent updates and content supplements in 2017 and 2018, provide a standard reference for evaluation. According to the scoring guidelines and previous research, a TIL density of ≥ 10% is defined as high TILs, and a TIL density of < 10% is defined as low TILs. Additionally, breast cancers with a TIL density of ≥ 50% are defined as lymphocyte predominant breast cancer (LPBC), and those with a TIL density of < 50% are defined as non-lymphocyte predominant breast cancer (nLPBC) ( ).

2.4. Statistical analysis

Statistical analysis was performed using SPSS 17.0 software. Descriptive statistics were represented by frequency and percentage (n, %). The t-test was used to compare the distribution of TILs in different sentinel lymph node metastasis states. The correlation between categorical variables based on clinical and pathological characteristics was assessed using the Pearson chi-square test and Fisher’s exact test. Univariate analysis was used to examine the correlation between each variable and SLNM, and factors with a P-value of less than 0.05 in the univariate analysis were further subjected to multivariate logistic regression analysis to calculate the odds ratio (OR) and the 95% confidence interval (CI), with a P-value of less than 0.05 considered an independent influencing factor. The ROC curve was used to evaluate the value of TILs density in predicting SLNM. The nomogram was constructed using the “nomogram” function in the R programming language.

2.5. Ethics approval and consent to participate

This study complies with the provisions of the Declaration of Helsinki in 2013 and has been granted exemption from informed consent by the Ethics Committee of Weifang People’s Hospital (KYLL20240425-1).
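As a concrete illustration of the subtype assignment rules in section 2.2, the sketch below encodes them as a small function. This is purely illustrative Python written for this rewrite; the study itself classified cases from pathologist-scored IHC, not from code.

```python
# Sketch of the surrogate molecular-subtype rules from section 2.2
# (ER/PR/HER2/Ki67). Purely illustrative, not part of the original workflow.
def molecular_subtype(er: bool, pr: bool, her2: bool, ki67_pct: float) -> str:
    ki67_high = ki67_pct > 14  # Ki67 cutoff used in the study
    if er:
        # Luminal A: ER+, PR+/-, HER2-, low Ki67; Luminal B otherwise.
        return "Luminal B" if (her2 or ki67_high) else "Luminal A"
    if not pr:
        return "HER2 enriched" if her2 else "Triple-negative"
    return "unclassified"  # ER-/PR+ combinations fall outside the four groups


assert molecular_subtype(er=True, pr=True, her2=False, ki67_pct=10) == "Luminal A"
assert molecular_subtype(er=False, pr=False, her2=False, ki67_pct=40) == "Triple-negative"
```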
3.1. Clinicopathological features

The study included a total of 337 patients for analysis, with a median age of 55 years (range 27-81 years). There were 230 postmenopausal patients (68.2%). There were 179 patients with cT1 (53.1%) and 158 patients with cT2 (46.9%). ER positivity was found in 287 cases (85.2%), PR positivity in 271 cases (80.4%), HER2 positivity in 57 cases (16.9%), and high Ki67 expression in 256 cases (76.0%). Androgen receptor (AR) positivity was noted in 319 cases (94.7%) and P53 positivity in 256 cases (76.0%), with 118 cases (35.0%) having a nuclear grade of III. There were 79 cases of Luminal A (LA) type (23.4%), 213 cases of Luminal B (LB) type (63.2%), 16 cases of HER2-enriched type (4.7%), and 29 cases of triple-negative (TN) type (8.6%). High TILs were present in 283 patients (84.0%) and low TILs in 54 patients (16.0%). There were 132 patients (39.1%) with LPBC and 205 patients (60.9%) with nLPBC. Macrometastasis in the sentinel lymph node occurred in 116 patients (34.4%), while 221 patients (65.6%) had no macrometastasis or only micrometastasis in the sentinel lymph node ( ).

3.2. The relationship between clinical and pathological features and SLNM

The correlations between clinical and pathological features and SLNM are summarized in ( ). For all patients, those over 55 years old had a higher rate of SLNM than those 55 years old or younger (P = 0.039); patients with cT2 had a higher rate of SLNM than those with cT1 (P = 0.016); patients with ER positivity had a higher rate of SLNM than those with ER negativity (P = 0.023); and patients with AR positivity had a higher rate of SLNM than those with AR negativity (P = 0.040). Additionally, TILs were significantly correlated with SLNM: patients with nLPBC had a higher rate of SLNM than those with LPBC (P = 0.010), and patients with high TILs had a significantly lower rate of SLNM than those with low TILs (P < 0.001). For patients with LA type breast cancer, SLNM was significantly correlated with age (P = 0.048), menstrual status (P = 0.033), and TILs, with patients with high TILs having a significantly lower rate of SLNM than those with low TILs (P = 0.001). For patients with LB type breast cancer, those with nLPBC had a higher rate of SLNM than those with LPBC (P = 0.035), and patients with high TILs had a significantly lower rate of SLNM than those with low TILs (P = 0.001). For patients with TNBC, those with high TILs and LPBC had a lower rate of SLNM (P = 0.003, P = 0.034). However, for patients with HER2 enriched breast cancer, there was no significant statistical correlation between clinical and pathological features and SLNM.

3.3. The correlation between clinical and pathological features and TILs

We analyzed the correlation between patient TILs and clinicopathological characteristics ( ). When classifying patients into LPBC and nLPBC based on a 50% cutoff value for TILs density, LPBC was significantly associated with the following clinicopathological features: age > 55 years (P = 0.001), amenorrhea (P = 0.023), ER negativity (P < 0.001), PR negativity (P < 0.001), HER2 positivity (P = 0.026), high Ki-67 expression (P < 0.001), AR negativity (P < 0.001), and high nuclear grade (P = 0.026). Upon subgroup analysis, patients with TNBC had the highest proportion of LPBC, followed by those with HER2 enriched breast cancer, while patients with LA and LB types of breast cancer had the smallest proportion of LPBC (P < 0.001) ( ).
However, when analyzing with a 10% cutoff for TILs density to differentiate high TILs from low TILs, apart from age (P = 0.012), no other clinicopathological factors showed a significant correlation with TILs. Upon subgroup analysis, there were also no significant differences in TILs among patients with LA, LB, HER2 enriched, and TN types of breast cancer (P = 0.902) ( ).

3.4. Analysis of predictive factors for SLNM

We conducted univariate and multivariate analyses of the predictive factors for SLNM, and the results showed that cT staging (P = 0.002, OR = 0.464) and the level of TILs (P < 0.001, OR = 4.549) are independent predictive factors for SLNM ( ) ( ). Upon subgroup analysis ( ), in TNBC, LPBC (P = 0.036, OR = 20.000) is an independent predictive factor for SLNM. In LA and LB types of breast cancer, the level of TILs (P = 0.005, OR = 8.895; P = 0.010, OR = 2.895) is an independent predictive factor for SLNM. Additionally, we used t-tests and box plots to analyze the correlation between TILs density and SLNM. In all patients, the TILs density in those with SLNM was significantly lower than in those without SLN metastasis (P = 0.001) ( ). When analyzed by subtype, only LB type breast cancer (P = 0.014) and TNBC (P < 0.001) showed statistical differences ( and ), while other subtypes showed no significant differences ( and ). The ROC curve was used to assess the value of TILs density in predicting SLNM in early-stage (cT1-2N0) breast cancer. The AUC was 0.624 (CI: 0.559-0.689), with a sensitivity of 0.440 and a specificity of 0.783, and the optimal cutoff value was 17.5%, indicating that TILs density provides modest but useful discriminative ability for SLNM ( ). Based on the results of the multivariate regression analysis, a nomogram model for SLNM was constructed using the R programming language ( ). The individual scores (Points) for each variable on the left side of the figure are found by drawing a vertical line upward, and the total score (Total Points) is obtained by summing these individual scores; by drawing a vertical line downward from the total score, the corresponding SLNM rate is found, which represents the predicted probability of SLNM for a specific breast cancer patient.
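The ROC workflow just described can be sketched in a few lines: compute the AUC for TILs density as a predictor of SLNM and derive the optimal cutoff from the Youden index. The arrays below are made-up stand-ins for the study data (the authors report AUC 0.624 and an optimal cutoff of 17.5%), and the original analysis was run in SPSS and R rather than Python.

```python
# Illustrative ROC + Youden-index cutoff sketch; input arrays are invented.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

tils_density = np.array([5, 10, 15, 20, 30, 40, 50, 60, 70, 80])  # % sTILs
slnm = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])                   # 1 = macrometastasis

# Lower TILs predicts metastasis, so score with the negated density.
fpr, tpr, thresholds = roc_curve(slnm, -tils_density)
auc = roc_auc_score(slnm, -tils_density)

youden = tpr - fpr
best_cutoff = -thresholds[np.argmax(youden)]  # undo the negation
print(f"AUC = {auc:.3f}, optimal TILs cutoff = {best_cutoff:.1f}%")
```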
In the era of precision medicine, breast cancer treatment is shifting towards precision, minimally invasive, and personalized approaches. Currently, SLNB has become the standard method for assessing the status of axillary lymph nodes, and numerous studies have demonstrated the high accuracy and predictive value of sentinel lymph node examination. For patients with negative sentinel lymph node biopsy results, ALND can be safely avoided. However, whether patients with a positive sentinel lymph node biopsy can be exempt from ALND has been the subject of recent clinical trials. The results of the ACOSOG Z0011 clinical trial showed that for patients with cT1-2 tumors and 1-2 positive sentinel lymph nodes who underwent breast-conserving surgery, received whole-breast radiotherapy, and had no preoperative treatment, ALND could be omitted without significant differences in 10-year recurrence rates and survival rates compared to patients who underwent axillary dissection. For early-stage cN0 breast cancer patients with sentinel lymph node biopsy positivity, the EORTC 10981-22023 AMAROS study suggested that axillary lymph node radiotherapy could be an alternative to ALND, with comparable 10-year recurrence rates and survival rates, and a significant reduction in the incidence of upper limb edema. In addition, the latest results from the SENOMAC trial (NCT02240472) demonstrated that for clinically lymph node-negative T1, T2, or T3 breast cancer patients with 1-2 sentinel lymph node macro-metastases who received adjuvant systemic therapy and radiotherapy according to national guidelines, omitting complete ALND was safe, with 5-year overall survival rates of 92.9% and 92.0%. However, some studies have shown that sentinel lymph node biopsy itself may represent an over-treatment for patients with clinically negative axillary lymph nodes. The recent SOUND (Sentinel Node vs Observation After Axillary Ultra-Sound) trial, a multicenter randomized controlled trial including 1,405 patients, showed no significant difference in 5-year DDFS (1.7% vs. 1.6%) and OS (3.0% vs. 2.6%) between the SLNB group and the group without axillary surgery. These results indicate that axillary surgery can be safely avoided for cT1 breast cancer patients with negative axillary ultrasound findings. Therefore, for early-stage breast cancer patients with clinically negative axillary lymph nodes, finding new markers to predict SLNM is a hot topic in current research.

Currently, many studies have shown that the tumor immune microenvironment plays an important role in the occurrence, metastasis, and prognosis of breast cancer and has become one of the new therapeutic targets for breast cancer. TILs are major participants in the tumor immune microenvironment. As early as 1992, researchers first linked breast cancer with TILs. Studies have shown that in breast cancers with high proliferation rates, a large infiltration of lymphocytes seems to improve the recurrence-free survival rate of patients. Since then, research on the relationship between TILs and the development and prognosis of breast cancer has entered a new stage. Currently, there have been many reports on factors affecting SLNM, such as age, cT stage, pathological nuclear grading, ER, PR, and HER2. In our study, the incidence of SLNM was similar to previous reports, with cT stage being one of the predictive factors. However, there have been few reports on the correlation between TILs and SLNM.
Previous studies have shown that TILs can serve as predictive factors for lymph node metastasis in early gastric cancer and melanoma. In addition, studies have reported that TILs density can be a predictive factor for SLNM in cT1N0 breast cancer. It is worth noting that, to our knowledge, our study is among the first to establish the correlation between TILs and SLNM in early-stage (cT1-2N0) breast cancer and to build a clinical prediction model. After defining the cutoff value of TILs density as 10%, we found that breast cancer with high TILs had a lower incidence of SLNM, and TILs and cT stage became independent predictive factors for SLNM. Furthermore, when we analyzed each subgroup, similar to previous reports, TNBC had the highest TILs density, followed by HER2 enriched breast cancer, with the lowest TILs density in luminal breast cancer. For luminal breast cancer, the relationship between TILs and SLNM was similar to that in the overall population and also had independent predictive power. However, for TNBC, when divided into LPBC and nLPBC based on a TILs density of 50%, it was clear that LPBC had a lower SLNM rate, and LPBC was an independent predictive factor for SLNM in TNBC. Regrettably, for HER2 enriched breast cancer, no significant results could be drawn due to the insufficient number of cases in our study. TNBC has a higher risk of genetic mutations, and studies have shown that for patients with BRCA1 and BRCA2 mutations, TILs have been proven to be a favorable factor for disease-free survival. In terms of overall survival, a 10% increase in TILs density reduces the mortality rate of BRCA1 carriers by 10%, while there is no significant impact on the mortality rate of BRCA2 mutation patients. For patients undergoing neoadjuvant therapy, studies have shown that patients with high TILs have significantly improved pCR rates, DDFS, and OS. Our study indicates that in TNBC, TILs have a more significant predictive value for SLNM.

Our results show the important predictive value of TILs, as an indicator of the tumor immune microenvironment (TIME), for the incidence of SLNM in early-stage (cT1-2N0) breast cancer. Regarding other immune cells and factors in the TIME, current research has shown that CD8+ T cells, CD4+ T cells, and Foxp3+ regulatory T cells (Tregs) are key to immune surveillance and tolerance. A decrease in the number of CD8+ T cells, an increase in Foxp3+ Tregs, and an increased Foxp3+ Treg/CD4+ T cell ratio are significantly associated with lymph node metastasis and prognosis. In addition, for TNBC, the presence of PD-L1+ cells also predicts a good prognosis for patients. However, there is little research on the relationship between the TIME and SLNM in early-stage breast cancer, which requires further exploration. We are also aware of certain limitations in our study. Firstly, as mentioned earlier, the number of patients with HER2 enriched breast cancer was too small to draw effective conclusions. In addition, our study is a single-center retrospective study with limited sample size and representativeness, and the established prediction model still needs to be verified in multi-center, large-scale clinical studies in the future.
Our study shows that the density of TILs and cT stage are independent predictive factors for SLNM in early-stage (cT1-2N0) breast cancer, with significant predictive effects for SLNM in Luminal breast cancer and TNBC.
|
In‐depth molecular analysis of combined and co‐primary pulmonary large cell neuroendocrine carcinoma and adenocarcinoma

INTRODUCTION

Adenocarcinoma (ADC) is the most common type of lung cancer, and oncogenesis is often driven by well-known mutually exclusive oncogenes, for example, KRAS and EGFR. In the last decades, tyrosine kinase inhibitors (TKIs) have been developed to target those oncogenes. Survival rates of stage-IV disease have been significantly improved by applying these new therapies. Resistance mechanisms to TKIs include additional mutations in the driver gene, the downstream signaling pathway, or bypass signaling pathways, or transformation to small cell lung carcinoma (SCLC) or, less frequently, large cell neuroendocrine carcinoma (LCNEC). The two latter mechanisms are associated with RB1 mutations in addition to TP53 mutations. LCNEC is a rare pulmonary tumor, accounting for 1% to 3% of all lung carcinomas. LCNEC is characterized by neuroendocrine morphology and positive immunohistochemical (IHC) staining of at least one neuroendocrine marker (CD56, chromogranin A and/or synaptophysin). Besides the before-mentioned transformation of ADC to LCNEC, other pathways of LCNEC oncogenesis are also involved. LCNEC seems to be a heterogeneous disease with clinically relevant subgroups. Almost half of LCNECs are mutated in both TP53 and RB1, and since this is a feature of SCLC, this is called the SCLC-like subtype. Another part of LCNECs harbor mutations in oncogenes identified in nonsmall cell lung carcinoma (NSCLC), for example, KEAP1, STK11, EGFR or KRAS, often in combination with TP53 mutations (NSCLC-like subtype). Interestingly, some LCNECs are combined with morphologically separate areas of ADC and/or squamous cell carcinoma, reported in up to 14% of LCNEC. The two morphologically distinct parts, one with clear neuroendocrine morphology, distinguish those combined tumors from NSCLC with neuroendocrine differentiation (NSCLC morphology with expression of neuroendocrine markers). Combined tumors may evolve due to a collision of two separate tumor nodules. Alternatively, the combined tumor might be the result of transformation of ADC toward neuroendocrine carcinoma in part of the tumor, in analogy to neuroendocrine transformation after TKI treatment, or vice versa. A combined tumor might also be the result of two divergent differentiation lineages of a tumor stem cell. This divergence might take place early in tumorigenesis or as a late event, resulting in a high overlap of mutations in both tumor parts. A clonal relationship between the two lesions has been shown for transformed tumors due to TKI treatment and for combined SCLC-NSCLC tumors, but has not adequately been investigated between neuroendocrine and nonneuroendocrine regions of combined LCNEC-NSCLC tumors. In addition, some lung cancer patients have two or more synchronous ipsilateral pulmonary lesions at diagnosis. Such lesions might be metastases of the primary tumor or a second independent primary tumor. The incidence of such co-primary lung tumors has been reported to be 1% to 7% in surgical series and up to 16% in more recent and unselected series. Only limited reports on LCNEC as part of co-primary ipsilateral lung tumors are available. According to current guidelines, two lung lesions with a different histologic subtype should be regarded as independent primary tumors.
However, some studies have shown clonality between multiple lesions with different histologic NSCLC subtypes, indicating that a common origin cannot be excluded. , In our study, we performed an in‐depth analysis of molecular, neuroendocrine and clinicopathological characteristics of 10 combined LCNEC‐ADC tumors. Furthermore, we analyzed the characteristics of five sets of ipsilateral synchronous pulmonary lesions, each including at least one tumor nodule with LCNEC.
MATERIALS AND METHODS 2.1 Sample selection Pathology reports of patients with LCNEC diagnosed in the Netherlands between 2003 and 2012 were retrieved from PALGA, the nationwide network and registry of histo‐ and cytopathology in the Netherlands ( ). , All reports were assessed by two researchers (B.H. and J.D.). All resection specimens containing both LCNEC and ADC morphology in one sample were identified for the “combined LCNEC” group. Samples with positive neuroendocrine IHC markers but exclusively ADC morphology were regarded as NSCLC with neuroendocrine differentiation and not included in our study. All cases with two resected synchronous ipsilateral pulmonary lesions, one being (partly) LCNEC and one being ADC, were selected for the “co‐primary tumor” group. Central revision by three experienced lung pathologists (R.v.S, L.H. and J.v.d.T.) was performed for those samples. Only samples in which the LCNEC‐part fulfilled the WHO‐classification criteria (2015) for LCNEC (ie, neuroendocrine morphology and at least one neuroendocrine marker with ≥10% staining) and the ADC‐part fulfilled those for ADC were included. Furthermore, the two parts had to be adequately distinguishable, and both parts had to comprise a substantial percentage of the total tumor (ie, ≥10%). Patients who had received neo‐adjuvant chemotherapy were excluded. 2.2 DNA isolation For each sample, four 10 μm slides were cut from a formalin‐fixed paraffin‐embedded (FFPE) block for DNA isolation, with a 4 μm slide cut before and after for hematoxylin‐eosin (HE) staining. Two experienced pulmonary pathologists (L.H. and J.v.d.T.) marked LCNEC‐ and ADC‐parts on those HE slides and estimated tumor cell percentages (minimally 30%). The 10 μm slides were hematoxylin stained, and manual micro‐dissection was performed under a dissecting microscope. Selected parts with maximum distance between the two parts were dissected, to avoid dissection from any transition area ( ). The dissected tissue fragments were incubated overnight at 56°C in 5% Chelex (Chelex 100 Resin [BioRad] in lysis buffer solution [Promega]) and 20 mg/mL proteinase K, mixed at a 10:1 ratio. Next, the samples were incubated for 10 minutes at 95°C, and after centrifuging, the supernatant was collected. 2.3 Mutational and copy number variation analysis Targeted next generation sequencing was performed by semiconductor sequencing with the Ion Torrent platform using the supplier's materials and protocols (Thermo Fisher Scientific) with a custom‐made dedicated panel for mutational analysis (65 genes), including genes frequently mutated in ADC ( EGFR , KRAS , BRAF and ALK [mutation hotspots]) and LCNEC ( RB1 [coding coverage 99%], TP53 [100%], KEAP1 [100%], STK11 [100%] and NOTCH1 [exon 26 and 27]) ( ). In addition, the panel comprised 262 highly polymorphic single nucleotide polymorphism (SNP) amplicons for copy number variation (CNV) detection (chromosomes: 1p, 2p, 3p, 5q, 6p, 7pq, 8pq, 9p, 10q, 11q, 12q, 13q, 15q, 16q, 17pq, 18q, 19pq and Xpq). Library and template preparations were performed consecutively with the AmpliSeq Library Kit 2.0‐384 LV and the Ion 540 Chef kit. Sequencing was performed on a 540 chip with the Ion GeneStudio S5XL system. Data were analyzed with Sequence Pilot Analysis Software (JSI Medical Systems). For each patient, normal tissue was included as a reference. For quality control, only variants with an amplicon coverage of >100 were taken into account. DNA variants that were also present in normal tissue were regarded as polymorphisms.
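To make the variant-filtering step above concrete, the following minimal R sketch applies the two stated rules: amplicon coverage >100 and exclusion of variants also present in matched normal tissue. The data frames, column names and mutation labels are hypothetical illustrations, not the authors' actual pipeline.

```r
## Hypothetical variant calls for one tumor part and its matched normal.
tumor_variants <- data.frame(
  gene     = c("TP53", "RB1", "KRAS", "STK11"),
  variant  = c("p.R175H", "p.Q217*", "p.G12C", "p.D194Y"),
  coverage = c(850, 95, 1200, 430),
  stringsAsFactors = FALSE
)
normal_variants <- data.frame(
  gene    = "STK11",
  variant = "p.D194Y",   # also present in normal tissue: a polymorphism
  stringsAsFactors = FALSE
)

filter_somatic <- function(tumor, normal, min_cov = 100) {
  # 1) quality control: require amplicon coverage above the threshold
  passing <- tumor[tumor$coverage > min_cov, ]
  # 2) subtract variants seen in matched normal tissue
  key_t <- paste(passing$gene, passing$variant)
  key_n <- paste(normal$gene, normal$variant)
  passing[!key_t %in% key_n, ]
}

filter_somatic(tumor_variants, normal_variants)
# TP53 p.R175H and KRAS p.G12C are retained as somatic mutations;
# RB1 fails coverage QC, and STK11 is flagged as a polymorphism.
```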
CNV (ie, amplifications, gains and deletions) was analyzed by normalized coverage using the Sequence Pilot Analysis Software. Homozygous deletions of RB1 were confirmed by fluorescence in situ hybridization (FISH). In addition, more sensitive SNP‐based CNV analysis was performed as described earlier. 2.4 Immunohistochemistry Automated IHC staining for p53, pRb, Ascl1, Rest, NeuroD1, Cd56, Chromogranin A, Synaptophysin, Sox1 and Ki‐67 was performed for all samples on 4 μm tissue sections on coated glass slides with the DAKO auto stainer (Agilent, Santa Clara, CA). A list of antibodies with dilution and information on the protocol (pH antigen retrieval and use of linkers) is provided in . Tissue micro arrays (TMAs) with material from resected confirmed pure ADC (N = 37) and resected confirmed pure LCNEC (N = 17) were used as a reference. Protein expression was assessed for percentage of positive tumor cells (0%‐100%) and staining intensity (0, 1, 2 or 3) by B.H., J.D. and J.v.d.T. H‐scores were calculated by multiplying the percentage of positive tumor cells by intensity. Ki‐67 proliferation index was assessed by eyeball estimation by J.v.d.T. Type of staining (nuclear, cytoplasmic or membranous) and cut‐off values for the different antibodies are shown in . 2.5 Statistical analysis All analyses were performed using SPSS (version 25 for Windows, Armonk, New York: IBM Corp.). Patient characteristics were analyzed with descriptive statistics. Median overall survival (OS) was estimated with Kaplan‐Meier analysis and is presented with a 95% confidence interval (95% CI). For each IHC marker, expression in the four histological subtypes (pure ADC, combined tumor ADC‐part, combined tumor LCNEC‐part and pure LCNEC) is reported, and associations between histology and IHC marker expression were evaluated with chi‐square or Fisher's exact test, followed by multiple post hoc tests if appropriate. Median H‐scores were calculated for all IHC markers in the four different histologic groups. Differences in H‐scores between the histologic subgroups were tested with the Kruskal‐Wallis test followed by multiple post hoc Mann‐Whitney U tests, if appropriate. P values <.05 were considered significant.
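As a worked illustration of the scoring and statistics described above, the R sketch below computes H-scores (percentage of positive cells multiplied by intensity, range 0 to 300) and compares them across the four histologic groups with a Kruskal-Wallis test followed by pairwise Mann-Whitney U tests. All values are simulated, not the study data, and the group labels are illustrative.

```r
set.seed(1)
ihc <- data.frame(
  histology = rep(c("pure_ADC", "combined_ADC", "combined_LCNEC", "pure_LCNEC"),
                  each = 10),
  pct_pos   = c(runif(10, 0, 30), runif(10, 10, 60),
                runif(10, 40, 95), runif(10, 50, 100)),
  intensity = sample(0:3, 40, replace = TRUE)
)

# H-score = percentage of positive tumor cells x staining intensity (0-300)
ihc$h_score <- ihc$pct_pos * ihc$intensity

# Overall comparison across the four histologic groups
kruskal.test(h_score ~ histology, data = ihc)

# Post hoc pairwise Mann-Whitney U tests (unadjusted here, as a sketch)
pairwise.wilcox.test(ihc$h_score, ihc$histology, p.adjust.method = "none")
```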
RESULTS 3.1 Patient selection and pathological review Screening of 305 LCNEC pathology reports identified 27 LCNEC with combined and/or a co‐primary LCNEC‐ADC diagnosis. After pathological review, combined LCNEC‐ADC morphology was confirmed in eight patients, combined LCNEC‐ADC with an ADC co‐primary tumor in two patients and co‐primary LCNEC and ADC tumors in three patients. These 13 unique patients were included in the combined LCNEC‐ADC group (N = 10, 3%) and/or the group with co‐primary synchronous ipsilateral LCNEC and ADC tumors (N = 5, 2%) ( ). In all combined tumors, clearly distinguishable parts of both LCNEC and ADC were identified (Figure ). In some of the tumors, a transition area with characteristics of both LCNEC and ADC was also present (Figure ). Patient characteristics are presented in Figures and . Median OS was 31 months (95% CI 27‐35 months) in the combined tumor group and 23 months (95% CI 17‐29 months) in the co‐primary group. 3.2 Mutational analysis Tumor clonality was indicated by shared (non‐hotspot) mutations in 10/10 combined LCNEC‐ADC tumors, whereas a clonal relation was confirmed in only 1/5 co‐primary tumors using mutation and CNV analysis. These shared mutations were not found in the analyzed normal tissue of the respective patients, excluding germline mutations. At least two identical somatic mutations were found in 8/10 combined tumors, with a median of 2 (range 1‐4) mutations (Figure and ). Of all identified mutations (N = 35) in the combined LCNEC‐ADC tumors, N = 23 (66%) were identified in both parts. Commonly identified identical mutations in both combined tumor parts included mutations in TP53 (90%), RB1 (30%), KEAP1 (30%), STK11 (30%) and KRAS (30%). A total of N = 5 (14%) different mutations were unique to LCNEC‐parts and N = 7 (20%) to ADC‐parts. Furthermore, homozygous deletion of RB1 (confirmed by FISH) was found in one patient in both the LCNEC‐ and ADC‐parts, and amplification of CCNE1 was found in the LCNEC‐part of another patient (Figure ). In the co‐primary tumors, clonality was only demonstrated in a combined LCNEC‐ADC with also a co‐primary ADC (Patient 13). This patient had two identical somatic mutations in both the ADC‐part and LCNEC‐part of the combined lesion and the second ADC lesion. No clonal relation was established in the three patients with pure co‐primary tumors or in the other combined LCNEC‐ADC with co‐primary ADC (Patient 14) (Figure and ). The sequencing coverage and quality statistics for each sample are summarized in . 3.3 Immunohistochemical staining IHC markers were evaluated in LCNEC‐ and ADC‐parts of combined tumors (Figure ) and in pure LCNEC and ADC as a reference. All combined cases had a nonwildtype p53 staining pattern in both LCNEC‐ and ADC‐parts, with upregulation in 3/10 cases and loss of p53 staining in 7/10 cases, in agreement with mutational analysis (Figure ). In 4/10 combined cases, both LCNEC‐ and ADC‐parts had loss of pRb expression, and RB1 was inactivated in 3/4 of those cases (mutation or homozygous deletion). In two additional cases, pRb was only lost in the LCNEC‐part, and the inactivation mechanism for RB1 was not found (ie, no RB1 mutations or homozygous deletion) (Figure ). Evaluation of transcription factors regulating neuroendocrine differentiation showed upregulation of Ascl1 and downregulation of Rest in LCNEC‐parts of combined tumors and pure LCNEC, compared with expression in pure ADC and ADC‐parts of the combined tumors (Figure and ).
Expression of neuroendocrine markers was found in 10/10 LCNEC‐parts and in 5/10 ADC‐parts of combined tumors (Figure ). In the latter parts, slightly increased expression of neuroendocrine markers was observed in areas closest to the LCNEC‐parts, or increased neuroendocrine marker expression was found in single cells scattered throughout the ADC‐part (Figure ). The number and intensity of positive neuroendocrine markers increased from pure ADC (low) to combined ADC (intermediate) to combined and pure LCNEC (high) (Figure and ). Ttf1 expression was positive in all cases, though a significantly lower median H‐score was found in both pure ADC and pure LCNEC compared with their equivalents in the combined tumors (Figure and ). For Sox1, a slight increase in positive cases and H‐scores was observed in ADC‐parts of combined tumors compared with pure ADC (Figure and ). No differences were found for NeuroD1 expression (Figure and ). Median Ki‐67 proliferation index was 30 in ADC‐parts of combined tumors and 50 in LCNEC‐parts ( P = .077) (Figure and ).
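The clonality assessment reported above reduces to simple bookkeeping over the mutation calls in each tumor part. The hypothetical R sketch below, with invented mutation labels, tallies shared and private somatic mutations for one combined tumor.

```r
## Somatic mutations called in each part of one hypothetical combined tumor.
lcnec_part <- c("TP53:p.R273H", "RB1:p.Q217*", "KEAP1:p.G333C", "NOTCH1:p.A1740T")
adc_part   <- c("TP53:p.R273H", "RB1:p.Q217*", "KEAP1:p.G333C", "KRAS:p.G12V")

shared       <- intersect(lcnec_part, adc_part)   # present in both parts
lcnec_unique <- setdiff(lcnec_part, adc_part)     # private to LCNEC-part
adc_unique   <- setdiff(adc_part, lcnec_part)     # private to ADC-part

cat(sprintf("shared: %d | LCNEC-only: %d | ADC-only: %d\n",
            length(shared), length(lcnec_unique), length(adc_unique)))
# Two or more shared somatic (non-germline) mutations, as observed in 8/10
# combined tumors above, argue for a common clonal origin of both parts.
```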
DISCUSSION We present a unique cohort of 10 combined LCNEC‐ADC tumors and show that both histological tumor parts are clonally related in all cases, whereas only one out of five synchronous ipsilateral LCNEC and ADC tumors was clonally related. Common mutations found in ADC (ie, TP53/EGFR/KRAS/STK11 and KEAP1 ) as well as in SCLC and SCLC‐like LCNEC (ie, RB1 inactivation) were observed in both parts of combined LCNEC‐ADC. The latter finding is of interest, because RB1 mutations are frequently found in EGFR mutated ADC transforming into SCLC (and LCNEC) under TKI treatment. , , , Hence, combined LCNEC‐ADC may develop from a common cell of origin related to ADC, in which inactivation of genes such as RB1 or dysregulation of Ascl1(+) and Rest(−) may promote neuroendocrine transformation. An overview of available literature on commonly mutated genes in combined LCNEC‐ADC and pure LCNEC is provided in Table . , , , , , Similar to LCNEC, almost all combined LCNEC‐ADC harbor TP53 mutations. , , , , Furthermore, other mutations related to ADC were found in 8/10 patients in our study. In particular, KRAS and EGFR mutations occur more frequently in combined LCNEC‐ADC tumors compared with pure LCNEC tumors, which might be relevant for treatment with targeted therapy of those patients. , , , , , , In our study, we found pRb inactivation in 7/10 patients with combined LCNEC‐ADC ( RB1 mutation or loss of pRb expression). The difference between RB1 mutational status and pRb expression might be explained by production of nonfunctional pRb and by additional mechanisms for pRb inactivation, that is, gene rearrangement, epigenetic inactivation or p16 inactivation. , Ito et al found RB1 mutations in 4/10 cases and loss of pRb expression in 7/10 cases of combined LCNEC‐ADC tumors. In the five LCNEC‐ADC cases presented by Miyoshi et al, 1/5 tumors had an RB1 mutation, but indications for other mechanisms of pRb inactivation were not investigated. Milione et al found RB1 mutations in only 1/16 tumors and loss of pRb expression in 8/26 tumors; however, only mutations in hotspot areas were analyzed in that study. Overall, frequent inactivation of pRb is found in combined LCNEC‐ADC, comparable to incidences in LCNEC in general. , , However, RB1 mutations are rare in ADC, and therefore, we would have expected to find a lower percentage of RB1 mutations, especially in ADC‐parts. It has been shown that RB1 mutations can result in BRN2 upregulation leading to neuroendocrine differentiation. , Apparently, RB1 inactivation by mutations or other mechanisms has an important role in the development of combined LCNEC‐ADC lesions. This is in concordance with RB1 mutations found in NSCLC tumors with EGFR mutations transforming to SCLC or LCNEC during the course of TKI therapy. , , , Because we and others found a clonal relationship between LCNEC‐ and ADC‐parts of combined tumors, a common cell of origin is likely. Presumably, this is a nonneuroendocrine cell, because ADC is known to originate from nonneuroendocrine cells, and development of LCNEC from nonneuroendocrine cells has also been reported in mouse models. , Even the two combined LCNEC identified as SCLC‐like most likely have a nonneuroendocrine cell of origin, considering the clear nonneuroendocrine morphology of the ADC‐part. Immunohistochemistry revealed that the number and intensity of positive neuroendocrine markers and Ascl1 expression increased progressively from pure ADC to combined ADC‐parts to combined and pure LCNEC.
Furthermore, some combined ADC‐parts showed sparse, scattered single‐cell neuroendocrine marker expression while others had increased expression near the LCNEC‐part. This argues for aberrant differentiation in the transition from ADC to LCNEC, in which some of the tumor cells already express neuroendocrine markers, despite conservation of clear morphological characteristics of ADC. Theoretically, it could also be possible that LCNEC tumors differentiate to ADC. However, this is less likely due to the less aggressive behavior of NSCLC compared with LCNEC, as is also reflected by the trend toward a lower median Ki‐67 proliferation index in ADC‐parts compared with LCNEC‐parts of combined tumors in our study. Furthermore, temporal transformation of LCNEC towards ADC during active treatment has never been reported, in contrast to the cases of transformation from ADC to LCNEC during TKI treatment. , , Nowadays, tumors with nonsmall cell, nonneuroendocrine morphology but with positive staining of neuroendocrine markers are regarded as “NSCLC with neuroendocrine differentiation” and treated as NSCLC. However, those tumors might resemble ADC‐parts of the combined tumors. The relevance of this neuroendocrine profile in ADC has been shown previously by inferior survival in Ascl1+ ADC patients and ADC patients with an Ascl1‐associated gene expression signature. , , It is tempting to speculate that ADC tumors with expression of Ascl1 or neuroendocrine markers are also a reflection of an aberrant differentiation process from ADC to LCNEC. Further studies should focus on morphological, histological, mutational and clinical features of these special tumors to evaluate their clinical relevance. Several molecular mechanisms possibly underlying the development of neuroendocrine differentiation in tumors have been reported, for example, pRb inactivation, Ascl1 upregulation or Rest downregulation. , , , , We found RB1 mutations and homozygous deletions or loss of functional pRb that might have been the trigger for neuroendocrine differentiation in Patients 1, 3, 5, 7, 8, 9 and 13. In the LCNEC‐part of the combined tumor of Patient 2, Ascl1 was upregulated and Rest downregulated, which might explain neuroendocrine differentiation in this part of the tumor. In Patients 4 and 14, neuroendocrine differentiation might have been driven by Ascl1 upregulation, which was already present in the ADC‐parts of both tumors. Whether or not the expression of Ascl1 is the result of another underlying mechanism driving neuroendocrine differentiation (eg, Notch1 silencing) remains to be studied. , , , In SCLC, expression of the transcriptional regulator NeuroD1 is an important feature in a subgroup of patients. However, we did not find a difference in NeuroD1 expression between ADC‐parts and LCNEC‐parts of combined tumors, and therefore, NeuroD1 does not seem to have an obvious regulatory role in these combined LCNEC‐ADC tumors. In contrast to the high clonality found in combined tumors, clonality existed in only one out of five sets of co‐primary LCNEC and ADC tumors. For this case (Patient 13) with combined LCNEC‐ADC and ipsilateral co‐primary ADC, neither management nor the staging category (IIIA) was affected in retrospect. A clonal relationship was demonstrated before in co‐primary NSCLC lesions (mainly ADC) with different morphologic subtypes by evaluation of 20 lung cancer genes, but a clonal relationship has never been reported for co‐primary tumors including LCNEC.
, Therefore, staging of co‐primary tumors remains a delicate matter, and mutational analysis could be used to evaluate a clonal relationship when considered crucial for staging and treatment decisions. In our study, we could only include 10 combined lesions and 5 patients with co‐primary tumors, identified from a dataset of 305 resected LCNEC cases in the Netherlands. The main reason for the low percentage of included patients compared with other studies is the very strict criteria we used to select a homogeneous population and ensure the quality of the study. , We only selected combined LCNEC‐ADC cases and excluded cases with squamous cell carcinoma, since more is known about targetable mutations and transformation to neuroendocrine carcinomas during the course of therapy in ADC. Furthermore, we restricted selection to cases with adequately distinguishable parts of ADC and LCNEC, both sufficient for microdissection of DNA. Tumors with solely intermingled parts and tumors with amphicrine cells were not included. In conclusion, our data indicate that combined tumors with LCNEC‐ and ADC‐parts, identifiable according to WHO criteria, are clonally related, with a high rate of mutations frequently encountered in pure ADC but also pRb inactivation, associated with neuroendocrine differentiation. This finding points to a common cell of origin of both histologically different neoplastic lesions. Co‐primary, but separate, LCNEC and ADC tumors were in all but one case not clonally related, indicating that these tumors should be regarded as two primary lesions instead of metastatic disease. In these cases, clonality analysis should be used if considered crucial for staging and treatment decisions.
All conflicts disclosed are outside the study. Bregtje C. M. Hermans reports grants from Bristol‐Myers Squibb, nonfinancial support from Abbvie; Jules L. Derks reports grants from Bristol‐Myers Squibb, nonfinancial support from Abbvie, personal fees from BMS, personal fees from Pfizer, personal fees from Boehringer‐Ingelheim, personal fees from Novartis, personal fees from Ipsen; Jan H. von der Thüsen reports personal fees from Roche, Roche Diagnostics, Bristol‐Myers Squibb, Eli Lily, MSD and grants from Bristol‐Myers Squibb and AstraZeneca; Wim Timens reports fees to Institution (UMCG) from Roche Diagnostics/Ventana, Merck Sharp Dohme, Bristol‐Myers Squibb and AbbVie; Winand N. M. Dinjens reports personal fees from Amgen, Bayer, Bristol‐Myers Squibb, Novartis and Roche, laboratory research fees from AstraZeneca, Bristol‐Myers Squibb and Abbvie; Hendrikus J. Dubbink reports grants, personal fees and nonfinancial support from AstraZeneca, personal fees from AbbVie, Bayer, Janssen, Pfizer and Lilly, nonfinancial support from Illumina, grants from Merck; Ernst‐Jan M. Speel reports grants from AstraZeneca, Pfizer, Novartis and Bayer, personal fees from Amgen, Lilly and Novartis, nonfinancial support from Abbvie and Biocartis; Anne‐Marie C. Dingemans attended advisory boards and/or provided lectures for Roche, BMS, Eli Lillly, Takeda, Boehringer Ingelheim, Astra Zeneca, Pfizer, BMS, Amgen, Novartis, MSD and Pharmamar. She received research support from Amgen. All paid to the institute. The other authors did not report conflicts of interest.
Our study has been approved by the Medical Ethical Committee of Maastricht UMC+ (14‐4‐034.8/ab) and was performed according to the regulations as defined by the ‘Dutch Federal, Human Tissue and Medical Research: Code of conduct for responsible use (2011)’, not requiring patient informed consent.
Appendix S1. Supporting Information
|
Correlation analyses of clinical and molecular findings identify candidate biological pathways in systemic juvenile idiopathic arthritis | d45c5747-aa9a-4d9b-9f18-de10a75850d5 | 3523070 | Pathology[mh] | Systemic juvenile idiopathic arthritis (SJIA) is currently classified as a subtype of juvenile idiopathic arthritis , and is characterized by a combination of arthritis and systemic inflammation, including fever, rash and serositis. SJIA has distinct demographic characteristics compared to other JIA subtypes, including onset throughout childhood and lack of gender preference. At clinical presentation, SJIA may resemble other diseases in children, including viral infection and Kawasaki disease [ - ]. The outcome in SJIA is variable, with close to half of children having a monocyclic course, less than 10% having an intermittent course, and over half having a persistent course , the latter often dominated by chronic arthritis. An adult form of SJIA is called Adult Onset Still Disease (AOSD) and occurs rarely . There are also unique immunophenotypic features in SJIA compared to other JIA subtypes, such as the lack of human leukocyte antigen (HLA) class II allele association, low or absent autoantibodies (specifically, antinuclear antibodies, rheumatoid factor or anti-CCP antibodies ), a tendency toward monocytosis , high levels of IL-18 and natural killer cell abnormalities in at least a subset of patients . These immunologic features, together with the therapeutic efficacy of inhibitors of IL-1 or IL-6 in SJIA and AOSD, suggest that these diseases might be best classified as autoinflammatory rather than autoimmune [ - ]. Despite our knowledge of some important immunological characteristics of active SJIA, the pathogenesis of SJIA remains unknown. One of the unanswered questions is whether independent biological processes underlie the systemic symptoms and the arthritis. Evidence from clinical studies shows that earlier in the disease, IL-1 inhibitors (and perhaps also IL-6 blockade) are efficacious, especially against systemic symptoms, but at a later stage, where arthritis may predominate, patients may develop resistance to these therapies [ - ]. These findings suggest that distinct biological processes may be associated with different manifestations and/or different stages of the disease. Transcriptional profiling of peripheral blood cells has been a useful approach for identifying biological pathways involved in SJIA and other complex diseases, such as polyarticular JIA (POLY), rheumatoid arthritis (RA), systemic lupus erythematosus and Kawasaki disease [ - ]. Previous studies of SJIA using microarray analyses have revealed transcriptional signatures in peripheral blood associated with active disease and with patient subsets [ - ]. We hypothesized that distinct gene expression patterns may be associated with individual clinical parameters used as measures of the systemic inflammation and the arthritis. We analyzed expression in peripheral blood mononuclear cells (PBMC) of a panel of inflammation-associated genes to determine patterns associated with elevations in two markers of disease activity in JIA, erythrocyte sedimentation rate (ESR) and number of active joints (joint count, JC). ESR is a marker of inflammation that is elevated in association with systemic as well as organ-specific inflammation, including arthritis . 
Active joints are defined as joints with non‐bony swelling or limited range of motion, with either tenderness or pain on motion; we chose active joint count as a marker of arthritis. We asked whether common or unique expression profiles are associated with ESR and JC in SJIA. In order to assess the specificity of our results for SJIA, we also asked whether the expression of the panel of tested genes differed in SJIA patients compared to patients with polyarticular course JIA (POLY), which is characterized by chronic polyarthritis. We then analyzed whether JC-associated genes differ between the early and late phases of SJIA. Based on the gene expression patterns, we identified candidate biological pathways associated with the systemic and arthritis components of SJIA.
Subject population and clinical data collection The study was approved by the Stanford University Administrative Panel on Human Subjects in Medical Research (protocol ID 13932). Informed consent was obtained from patients or parents or guardians before blood sample collection. Venous blood samples from all subjects were treated anonymously throughout the analysis. All JIA patients were followed at the Pediatric Rheumatology Clinic at Lucile Packard Children's Hospital. SJIA and POLY patients met amended ILAR criteria for diagnosis . Thirty-one SJIA and 18 POLY individual patients participated in this study. A total of 46 SJIA samples (22 Flare and 24 Quiescence samples) and 25 POLY samples (17 Flare and 8 Quiescence samples) were analyzed. Some patients (SJIA n = 15; POLY n = 7) contributed samples during both flare and quiescent disease states. Twelve POLY patients were rheumatoid factor (RF) negative, and six were RF positive. All samples were classified as flare (F) or quiescence (Q) based on a scheme we developed for this and other studies of JIA [ , , ] (Tables , and ). SJIA flare samples had a systemic score of ≥ 1 and/or an arthritis score of ≥ B (≥ 5 active joints). POLY flare samples had an arthritis score of ≥ 1 (≥ 1 to 10 active joints). Arthritis severity is scored differently for SJIA and POLY patients, because the patterns of joint involvement generally are different between the two groups , with the exception that some SJIA patients develop POLY-like arthritis with symmetric, small joint involvement. The arthritis scoring system is based on frequency analyses of numbers of active joints in early active SJIA and in active POLY [Sandborg C, frequency data not shown]. Comprehensive clinical information was collected at each patient visit, including history, physical exam and clinical laboratory values . As shown in Table , and consistent with the known demographics of JIA , SJIA patients are younger than POLY patients and are gender-balanced, whereas there are more female than male POLY patients. As expected, flare (F) patients from both SJIA and POLY cohorts differ significantly from quiescent (Q) patients for variables reflecting active inflammation: erythrocyte sedimentation rate (ESR), white blood cell count (WBC), platelets (PLT) and joint count (JC, number of affected joints). Sample processing Blood samples were obtained only when there was a clinical need for blood tests. A total of 3 to 4 ml of blood was collected directly in vacutainer cell preparation tubes (CPT) with sodium citrate (Becton Dickinson, Franklin Lakes, NJ, USA). Peripheral blood mononuclear cells (PBMCs) were isolated within three hours of collection by centrifugation of CPT tubes, per the manufacturer's instructions. RNA preparation Purified PBMCs were lysed in RLT reagent (Qiagen, Valencia, CA, USA) and lysate was stored at -80°C until RNA extraction. RNA was isolated using the RNeasy mini kit (Qiagen), per the manufacturer's instructions with an additional on-column DNase I (Qiagen) treatment for 40 minutes. The RNA concentration was measured by the Ribogreen assay (Molecular Probes, Grand Island, NY, USA) or by absorbance at 260 nm. The purity of RNA was assessed by the ratio of the absorbance readings at 260 and 280 nm. The integrity of the RNA samples was also checked by either agarose gel electrophoresis or with the Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA).
Gene panel selection In a pilot study, paired flare/remission PBMC samples from 14 SJIA patients were processed for RNA as described and analyzed using Lymphochip cDNA microarrays (Patrick Brown, Stanford University, Stanford, CA, USA) . A large number of genes were identified as differentially expressed in flare versus remission samples by Significance Analysis of Microarrays (SAM) . Hierarchical clustering was performed with the Cluster program and visualized using TreeView (Eisen Lab, University of California, Berkeley, CA), as illustrated for a subset of genes in Additional file , and also in . The full data set, GSE37388, is released to the public on the Gene Expression Omnibus (GEO) database. From the large set, we selected genes (n = 131) representing various ontologic categories, such as signaling, transcription, inflammation and immune function. We then added other immune-related genes (n = 50) that are expressed in PBMC and implicated in JIA or RA by published reports. The genes were selected prior to analysis of any blood samples for this study, and the samples used for the microarray experiment were not re-used here. The 181 selected genes are shown on Additional file ; we confirmed that many are immune-related using the program PANTHER 7.0 (Protein ANalysis THrough Evolutionary Relationships) Classification System (Thomas Lab, University of Southern California, Los Angeles, CA, USA), which classifies proteins by their functions, using published experimental evidence and evolutionary relationships ( http://www.pantherdb.org/ ) to categorize their biological functions. This analysis showed that the largest functional category is inflammatory chemokine and cytokine signaling pathways (14.6% of the genes), followed by interleukin signaling pathways (10.8%), apoptosis signaling pathways (9.9%) and toll receptor signaling pathways (6.2%). A full list of categories covered is shown on Additional file . Gene expression detection by kinetic PCR The kinetic RT-PCR assay was performed as described . Briefly, all reactions were carried out in duplicate as a single-step RT-PCR reaction, using SYBR green chemistry. Data from duplicate reactions for each gene were averaged and normalized based on levels of expression of four housekeeping genes: eukaryotic translation elongation factor 1 alpha1 (EEF1A1), protein phosphatase 1, catalytic subunit, gamma isoform (PPP1CC), ribosomal protein L12 (RPL12), and ribosomal protein L41 (RPL41). The normalized expression level, housekeeping normalized units, of each gene was used to determine the fold change among samples. In a preliminary experiment, we found that a subset (n = 75) of our gene panel showed very limited variation in level (± 2-fold difference from the mean value) in five healthy individuals (two females and three males) over a four-month period (data not shown). Identification of ESR or JC significantly associated genes in SJIA and POLY Genes significantly associated with SJIA and POLY were determined using Pearson's correlation and Student's t -test, as explained in the Results section. Significance analysis of the canonical biological pathways The biological pathways indicated by the group of genes associated with each clinical parameter/patient cohort subset were determined by pathway analysis with Ingenuity IPA system (Ingenuity Systems, Redwood City, CA, USA; http://www.ingenuity.com ). The significance of either ESR or JC related pathways was analyzed using sparse linear discriminant analysis method, as previously described . 
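As an aside, the kinetic PCR normalization described above can be illustrated with a short R sketch: duplicate reactions are averaged and expression is normalized to the four housekeeping genes (EEF1A1, PPP1CC, RPL12, RPL41). The text does not specify the exact formula, so the use of a geometric mean of the housekeeping genes is an assumption here, and all values are invented.

```r
raw <- data.frame(
  gene = c("EEF1A1", "PPP1CC", "RPL12", "RPL41", "IL10", "TP53"),
  rep1 = c(120, 95, 200, 180, 4.2, 8.1),   # duplicate reaction 1
  rep2 = c(118, 101, 190, 176, 3.8, 7.9)   # duplicate reaction 2
)

housekeepers <- c("EEF1A1", "PPP1CC", "RPL12", "RPL41")

# Average the duplicate reactions for each gene
raw$mean_expr <- rowMeans(raw[, c("rep1", "rep2")])

# Geometric mean of the four housekeeping genes (assumed normalizer)
hk <- raw$mean_expr[raw$gene %in% housekeepers]
norm_factor <- exp(mean(log(hk)))

# "Housekeeping normalized units" for the genes of interest
raw$normalized <- raw$mean_expr / norm_factor
raw[!raw$gene %in% housekeepers, c("gene", "normalized")]
```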
Correlation between SJIA ESR-related and JC-related pathways was analyzed by Pearson correlation. To determine a threshold to extract pathways that significantly differentiate ESR and JC in SJIA, 500 simulated SJIA ESR-related and 500 simulated JC-related pathway data sets were created by permutation of canonical pathway identifications and their associated pathway P- values for SJIA ESR or JC. For each canonical pathway, the absolute P- value difference in logarithm form between SJIA ESR and JC was computed using one of the 500 simulated SJIA ESR and one of the 500 simulated JC pathway P -value data sets. This led to 500 absolute log P- value differences for each canonical pathway between SJIA ESR and JC, which later were sorted and 20%, 50% and 80% values were computed. Densities of the absolute differences between SJIA ESR and JC-related pathways for the original and the simulated data sets (20%, 50%, and 80%) were plotted using the R package. Comparison of the original data set and the 80 th percentile simulated data set determined the threshold to select significantly different pathways between SJIA ESR and JC. A similar approach was applied to the analysis of significantly different pathways between SJIA ESR and POLY ESR.
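The permutation scheme just described can be sketched in R as follows. Pathway P-values here are simulated; the code shows only the logic of building 500 permuted data sets, taking per-pathway percentiles of the absolute log P-value differences, and using the 80th percentile as the selection threshold.

```r
set.seed(42)
n_path <- 189
p_esr <- runif(n_path, 1e-6, 1)   # hypothetical pathway P-values (SJIA ESR)
p_jc  <- runif(n_path, 1e-6, 1)   # hypothetical pathway P-values (SJIA JC)

# Observed absolute difference in log10 P-value for each pathway
observed_diff <- abs(log10(p_esr) - log10(p_jc))

# 500 simulated data sets: permute P-values across pathway identities
n_perm <- 500
perm_diff <- replicate(n_perm, {
  abs(log10(sample(p_esr)) - log10(sample(p_jc)))
})                                 # n_path x n_perm matrix

# Per-pathway 20th/50th/80th percentiles of the simulated differences
perm_q <- apply(perm_diff, 1, quantile, probs = c(0.2, 0.5, 0.8))

# Pathways whose observed difference exceeds the 80th percentile null
significant <- which(observed_diff > perm_q["80%", ])
length(significant)

# Density comparison of observed versus permuted differences, as in the text
plot(density(observed_diff), main = "Observed vs permuted |log10 P| differences")
lines(density(perm_diff), lty = 2)
```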
ESR and JC-associated gene expression in JIA ESR was chosen as a quantitative measure of systemic inflammation for our analysis, as it typically rises in association with flares of systemic symptoms and was assessed in the largest number of samples. We also considered another measure of systemic inflammation, C-reactive protein (CRP), but few samples were assessed for CRP, precluding the use of this parameter in our analysis. The number of active joints (joint count, JC), as defined above (Introduction), was used as a quantitative measure of arthritis. Our samples were initially classified as flare or quiescence based on criteria that we have developed for analysis of JIA (Tables , and ), as previously published , and ESR and JC are part of these criteria. We performed a distribution analysis of ESR or JC values by disease states (flare/quiescence) using the R Epicalc package ( http://cran.r-project.org/web/packages/epicalc/ ) to investigate if additional subgroups would be revealed. Visual inspection of the results shows that the SJIA and the POLY flare patients could be partitioned into two groups related to their ESR values (Figure ): F1, with ESR values below 20, and F2, with ESR values above 20. All patients in the F1 subgroup had mild flares by our other criteria (not shown). Quiescence samples all had ESR below 20, clustering together with the F1 flare group. This analysis also showed that, in our samples, JC values in the flare and quiescence disease states are generally non-overlapping in both SJIA and POLY patients, with quiescence samples having 0 or 1 active joint, and all flare samples above zero (Figure ). We analyzed the association of the 181 gene panel with ESR and JC in both SJIA and POLY samples, using the strategies delineated in Figure . Genes whose expression was significantly associated with ESR or JC in SJIA and POLY cohorts were identified in two ways. As described in Figure , Pearson correlation analyses were performed to correlate ESR or JC values with patient expression data sets. To assess the significance of these findings, we calculated the global false discovery rate (gFDR) by 100-fold permutation of normalized kPCR data. After determining the gFDR, local FDR (lFDR) analysis can compute and assign significance measures to all features . A cut-off value of lFDR ≤ 0.05 was used to select significant genes for downstream pathway analysis. We also analyzed gene expression association using Student's t -test, as shown in Figure . For ESR, based on the analysis from Figure , we initially divided our samples into three groups: flare samples with ESR < 20 (F1), flare samples with ESR > 20 (F2), and quiescence (to ensure that differences between F1 and Quiescence were not overlooked). We identified genes whose mean expression value differed significantly between the F1 (ESR < 20) and F2 (ESR > 20) patient groups, but no differences in gene expression between the F1 and quiescence groups were found. Subsequently, we merged the F1 flare and quiescence samples into one group for ESR analysis. For JC, no other partitioning was necessary, as shown in Figure , and samples were grouped into flare and quiescence groups. As we did previously for Pearson analysis, we calculated local FDR, and a value of < 0.05 was considered significant (Figure ). This second analysis found genes missed by correlation analysis: the latter requires a linear relationship, whereas the t -test also captures genes with more tightly regulated expression (small differences between F and Q samples).
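A minimal R sketch of this correlation screen with permutation-based significance follows, using simulated data; the authors' exact gFDR/lFDR computation, which assigns local significance measures per feature, may differ from this simple global estimate.

```r
set.seed(7)
n_samples <- 46; n_genes <- 181
expr <- matrix(rnorm(n_genes * n_samples), nrow = n_genes,
               dimnames = list(paste0("gene", 1:n_genes), NULL))
esr  <- rpois(n_samples, 30)                    # hypothetical ESR values

# Observed Pearson correlation of each gene's expression with ESR
obs_r <- apply(expr, 1, cor, y = esr)

# Null distribution from 100 permutations of the sample labels
n_perm <- 100
null_r <- replicate(n_perm, apply(expr, 1, cor, y = sample(esr)))

# Global FDR at threshold t: expected null calls / observed calls
gfdr <- function(t) {
  mean(colSums(abs(null_r) >= t)) / max(sum(abs(obs_r) >= t), 1)
}
thresholds <- seq(0.2, 0.6, by = 0.05)
data.frame(threshold = thresholds, gFDR = sapply(thresholds, gfdr))
# Genes with |r| above the threshold where gFDR drops below 0.05 would be
# carried forward to pathway analysis.
```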
Pearson correlation analysis of expression data from SJIA subjects found 79 genes from our panel to be ESR-correlated and 36 genes to be JC-correlated. Student's t -test found 66 ESR-associated and 79 JC-associated genes in SJIA. This pattern differed from relationships of the expression levels of the same genes with these clinical parameters in POLY-course JIA patients: 20 ESR-correlated and no JC-correlated genes were found in POLY, and none of the genes were ESR-associated or JC-associated by Student's t -test in POLY. Combining both analyses, we found 91 ESR-related and 92 JC-related genes in SJIA, and 20 ESR-related and no JC-related genes in POLY. A list of significantly associated genes is on Additional file . Additional file (Supplementary Figure ) diagrams the fold changes in expression of the selected genes between groups (for example, F2 versus F1 + Q) and between quartiles of ESR or JC. The probability density analysis graphically represents the normalized frequency distribution of the fold ratio of the selected two groups. This result indicates that our selected genes have significant variation between groups (limited variation = fold ratio close to 1) while showing strong association. This analysis further supports our approach (Figure ) for the identification of significant associations. The reduced number of genes associated with these clinical parameters in the POLY cohort was not surprising given that the gene list was chosen in large part using expression data from SJIA PBMC. Indeed, this finding implies a degree of specificity of the associated genes for SJIA (see discussion). Another likely contribution to this difference might be the extent that disease-related processes are reflected in peripheral blood in the two disease types. Comparative analysis of SJIA ESR and JC related pathways Using the lists of associated genes, we determined biological pathways associated with each clinical parameter/patient cohort by pathway analysis with the Ingenuity IPA system. ESR-related and JC-related pathways were then compared, to investigate whether the same biological pathways are involved in ESR and JC elevations in SJIA. As shown in Figure , there is strong correlation (Pearson correlation coefficient, 0.91) between SJIA ESR and JC-related pathways (n = 189), implying that some of the same pathways play roles in the systemic and arthritic components of the disease. Shown in Figure , densities of the absolute log P- value differences of all pathways between SJIA ESR and JC, for the original and the 20, 50 and 80 percentile of the simulated random data sets, were computed and plotted. Significantly differentiating pathways between ESR and JC in SJIA revealed by this analysis are in Table (top two pathways), as were pathways that were correlated comparably with both ESR and JC (Table ). The only pathway more significantly related to SJIA ESR than to SJIA JC was the glucocorticoid (GC) receptor signaling pathway. The expression of most of the genes in this pathway was higher in samples with higher ESR compared to samples with lower ESR. In contrast, the PI3K/Akt signaling pathway was more significantly related to SJIA JC. Though the significance of the association favored JC, the expression levels of most genes in this cell survival pathway were higher in samples with higher ESR or higher JC. However, as might be expected, TP53, which encodes p53, a pro-apoptotic, negative regulator of the Akt pathway , was down-regulated in association with JC and ESR elevations. 
Comparative analysis of SJIA ESR and JC related pathways

Using the lists of associated genes, we determined biological pathways associated with each clinical parameter/patient cohort by pathway analysis with the Ingenuity IPA system. ESR-related and JC-related pathways were then compared, to investigate whether the same biological pathways are involved in ESR and JC elevations in SJIA. As shown in Figure , there is strong correlation (Pearson correlation coefficient, 0.91) between SJIA ESR- and JC-related pathways (n = 189), implying that some of the same pathways play roles in the systemic and arthritic components of the disease. As shown in Figure , densities of the absolute log P-value differences of all pathways between SJIA ESR and JC were computed and plotted for the original data and for the 20th, 50th and 80th percentiles of the simulated random data sets. Pathways that significantly differentiate ESR from JC in SJIA, as revealed by this analysis, are shown in Table (top two pathways), as are pathways that correlated comparably with both ESR and JC (Table ). The only pathway more significantly related to SJIA ESR than to SJIA JC was the glucocorticoid (GC) receptor signaling pathway. The expression of most of the genes in this pathway was higher in samples with higher ESR compared to samples with lower ESR. In contrast, the PI3K/Akt signaling pathway was more significantly related to SJIA JC. Though the significance of the association favored JC, the expression levels of most genes in this cell survival pathway were higher in samples with higher ESR or higher JC. However, as might be expected, TP53, which encodes p53, a pro-apoptotic, negative regulator of the Akt pathway , was down-regulated in association with JC and ESR elevations. Consistent with these results, we have previously reported that purified monocytes have lower TP53 transcript levels and increased cellular resistance to apoptotic stimuli during SJIA flare compared to quiescence . Overall, the identification of some pathways that are differentially correlated with ESR and JC raises the possibility of differences in aspects of the immunobiology of arthritis compared to systemic inflammation in SJIA, as discussed below. A number of pathways were significantly related to SJIA ESR and JC to the same degree (Table ). For several of these pathways, the expression of most of the associated genes was higher in samples with higher ESR or JC. These pathways include, among others, protein kinase receptor (PKR, a pattern-recognition receptor) signaling in interferon induction, T cell and B cell signaling in the pattern of rheumatoid arthritis (RA), and (macrophage) migration inhibition factor (MIF) regulation of innate immunity. Genes in other pathways associated with activating innate responses, such as lipopolysaccharide (LPS) signaling and triggering receptor expressed on myeloid cells (TREM1) signaling, are also higher in samples with either higher ESR or JC. Genes in other pathways showed lower expression in samples with higher ESR or JC, such as T helper cell differentiation, iCOS-iCOSL (inducible T-cell co-stimulator/ligand) signaling in T helper cells and CD40 (co-stimulatory molecule on antigen-presenting cells) signaling. Notably, these down-regulated pathways are associated with adaptive immune responses. Also down-regulated in association with elevations of both SJIA ESR and JC is the pathway for crosstalk between dendritic cells and natural killer cells, which can be involved in restriction of innate responses . Two genes, the DNA repair enzyme ATM and the transcription factor NFATC2 (also known as NFAT1), are in the pathway for RANK signaling in osteoclasts and are both down-regulated in association with systemic (ESR) and arthritic (JC) disease activity. The RANK/RANKL pathway is an important regulator of bone remodeling . An ATM deficiency has been described in CD4+ T cells from rheumatoid arthritis (RA) patients , associated with premature immunosenescence. However, ATM may also be involved in bone formation, and ATM-deficient animals show increased numbers of osteoclasts . The transcription factor NFATC2 has been identified as a negative regulator of cartilage cell growth . It is also important in T cell effector function, translocating to the nucleus following T cell receptor activation and regulating the expression of several cytokines in CD4 T cells (reviewed in ). Thus, its inverse correlation with ESR and JC may be similar to the other T cell-related pathways described above. Interestingly, hyperactivation of NFATC2 in T cells is associated with decreased susceptibility to experimental autoimmune encephalomyelitis, indicating that increased NFATC2 activity may have immunomodulatory effects that down-regulate autoaggressive reactions .
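A compact R sketch of this pathway-level comparison follows; the same logic applies to the SJIA-versus-POLY comparison in the next section. The per-pathway p-values here are simulated stand-ins for values that would come from the IPA output, and shuffling one profile serves as a simple permutation null for the percentile reference curves.

## Sketch of comparing two pathway significance profiles (toy p-values).
set.seed(2)
n_path <- 189
p_esr <- runif(n_path)^2                    # stand-in ESR pathway p-values
p_jc  <- runif(n_path)^2                    # stand-in JC pathway p-values

## Agreement between the two pathway profiles on the -log10 scale
cor(-log10(p_esr), -log10(p_jc))

## Observed |log p| differences vs. a permutation null: shuffling one profile
## breaks the pairing and yields reference percentiles (e.g., 20/50/80)
obs_diff  <- abs(log10(p_esr) - log10(p_jc))
null_diff <- replicate(100, abs(log10(p_esr) - log10(sample(p_jc))))
quantile(null_diff, c(0.2, 0.5, 0.8))
plot(density(obs_diff), main = "|log10 p| differences, ESR vs JC pathways")

Pathways whose observed difference exceeds the upper null percentiles are the "differentiating" pathways; those near zero are comparably related to both parameters.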
Comparative analysis of SJIA and POLY ESR-related pathways

We next asked whether some biological pathways involved in SJIA ESR elevation are also involved in POLY ESR elevation, by comparing SJIA and POLY ESR-related genes. As shown in Figure , there is reduced correlation (correlation coefficient, 0.59) between SJIA and POLY ESR-related pathways (n = 119), compared to the correlation we observed between SJIA ESR- and SJIA JC-related pathways. As shown in Figure , densities of the absolute log P-value differences of all pathways between SJIA and POLY ESR were computed and plotted for the original data and for the 20th, 50th and 80th percentiles of the simulated random data sets. Several pathways differ significantly between SJIA and POLY ESR, as quantified by the absolute difference between the SJIA and POLY pathways (Table ). These include the role of macrophages, fibroblasts and endothelial cells in RA, IL-10 signaling, and glucocorticoid receptor signaling, among others. These data suggest a greater role for these pathways in SJIA compared to POLY. However, very few genes were associated with POLY ESR in most pathways, resulting in low significance of association with POLY (not shown). In these differentiating pathways, the (few) genes correlating with ESR were higher in POLY samples with higher ESR values, suggesting that these genes, perhaps in the context of other pathways, or in the context of the identified pathways but within the joint, contribute to inflammation in polyarticular-course JIA. Indeed, evidence from RA, the adult disease most similar to polyarticular JIA, implicates monocyte and macrophage activation and endothelial cell dysfunction , both in joints and in the periphery. As observed in the previous analysis, pathways associated with T cell responses are significantly associated with ESR in SJIA, but the genes in these pathways show lower expression in samples with higher ESR compared to samples with lower ESR. In addition, this analysis showed that genes in the B cell activating factor (BAFF) signaling, APRIL (a proliferation-inducing ligand, TNFSF13)-mediated signaling and IL-15 signaling pathways show lower expression in SJIA samples with elevated ESR (Table ).

Comparative analysis of joint count (JC)-correlated genes in systemic and arthritis phase (SAF) and arthritis phase (AF) SJIA patients

Using the previously identified JC-associated genes (Additional file ), we then investigated whether arthritis-related gene pathways change when the disease phenotype changes from the earlier systemic and arthritic activity/flare (SAF) phase to the arthritis-only activity/flare (AF) phase. As shown in Figure , SJIA patients were distributed according to values of JC and systemic scores (Tables and ) to identify the SAF and AF subgroups. Figure shows the JC-associated genes that are significantly correlated with JC in the SAF and AF subgroups. Within the SJIA SAF group, only IL-10 was identified as positively correlating with JC (P-value 0.026). In contrast, 12 genes were found to correlate significantly (negatively) with JC in the AF subgroup (Figure , listed in order of decreasing significance): TRAP1, IL2RG, CD40LG, PARP1, TP53, ATM, NFATC2, GZMA, CASP10, PFKFB3, IRF3 and IRF4. Canonical pathway analysis (Figure ) mapped 8 of the 12 JC-correlated genes to a single network with the Th2 cytokine IL-4 at its center. These functional relationships suggest that lack of IL-4 may contribute to arthritis in the AF subgroup. The difference in JC-correlated genes/pathways between AF and SAF supports the hypothesis that different biological pathways are engaged in the chronic arthritis stage versus the more acute (or systemic symptom-associated) arthritis of SJIA.
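The subgroup-wise correlation step can be sketched as below; the AF expression matrix and joint counts are simulated placeholders, and unadjusted cor.test p-values stand in for the study's significance assessment within each subgroup.

## Sketch of per-gene JC correlation within a subgroup (toy AF data).
set.seed(3)
expr_af <- matrix(2^rnorm(181 * 15), nrow = 181,
                  dimnames = list(paste0("gene", 1:181), NULL))
jc_af <- rpois(15, 4) + 1                      # hypothetical joint counts

res <- t(apply(expr_af, 1, function(g) {
  ct <- cor.test(g, jc_af)                     # Pearson by default
  c(r = unname(ct$estimate), p = ct$p.value)
}))

## Genes negatively correlated with JC in this subgroup
rownames(res)[res[, "r"] < 0 & res[, "p"] < 0.05]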
Discussion

In this study, we sought to identify molecular pathways involved in the systemic and arthritic components of SJIA by investigating the gene expression pathways associated with increases in ESR and active joint count. We chose ESR as a marker of systemic inflammation, but we note that SJIA flares associated with elevated ESR may also include arthritis. Further, SJIA flares with macrophage activation syndrome (MAS) may actually lower the ESR, owing to fibrinogen consumption as part of the coagulopathy . The latter issue does not confound our analysis, as the three flare samples with low ESR were from patients with mild flares without MAS. Strictly speaking, our approach delineated gene associations with ESR; however, in our group of SJIA samples, ESR typically correlated closely with other evidence of systemic disease. Several variables that influence transcriptional profiles should be considered in relation to our results. It is possible that some of the observed differences in gene expression are due to differences in the cell type composition of PBMC between SJIA and POLY, or between flare and quiescence . Changes in the abundance of cell types may themselves be relevant to disease mechanisms. For monocyte-related genes, we and others have shown that differences in transcript abundance are not explained by differences in monocyte numbers alone, but reflect activation state. The use of medication and disease duration at the time of sampling may also influence the pattern of gene expression. A larger, likely multi-center, study will be needed to rigorously control for these important variables. Our analysis revealed overlap in the molecular pathways involved in increased ESR and elevated JC in SJIA. This result was not unexpected, given reported correlations between these two parameters . However, the glucocorticoid receptor (GCR) signaling pathway was more significantly related to ESR than to JC. Systemic symptoms of SJIA respond to exogenous steroids, suggesting that the elevation of GCR signaling may represent an endogenous effort to dampen systemic inflammation. The comparable doses of exogenous steroids in the F and Q groups make it less likely that steroid therapy is inducing this pathway. Notably, polymorphism in the GCR gene is associated with the level of inflammatory activity in JIA . Involvement of GCR signaling in systemic inflammation in SJIA, and the stronger association of this pathway with inflammation in SJIA versus POLY (at least as reflected in blood cells), is consistent with the reduced responses of SJIA patients to non-glucocorticoid drugs that are efficacious in subsets of POLY patients (for example, methotrexate and anti-TNFα ). We also found that the PI3K/Akt signaling pathway is more significantly related to SJIA JC than to ESR. This pathway, which is activated by a variety of stimuli, including IL-1β, TNFα and IL-6, is potentially involved in IL-17 production . IL-17 could be an important factor in SJIA arthritis , particularly in the later phase. We did not assess expression of IL-17 in this study, but our preliminary data suggest that CD4+ T cells from SJIA patients secrete higher levels of IL-17 than control cells when cultured in Th17-polarizing conditions [Wong M, Mellins E, unpublished results]. Recently, enrichment of Th17 (and Th1) cells in the blood of SJIA patients has been described .
Our findings are consistent with the hypothesis that dysregulation of the innate immune system makes a more prominent contribution to SJIA immunopathology than alterations of the adaptive immune system , whereas adaptive responses are thought to drive oligoarticular and polyarticular JIA . However, our results also implicate deficiencies in genes associated with T cell-related responses in SJIA pathology, similar to observations in other studies . For example, reduced cytolytic cell activity and diminished function of regulatory T cells may play roles in SJIA etiology . Down-regulated genes associated with cytolytic function also participate in dendritic cell/NK cell and monocyte/NK cell interactions. Some cytolysis genes are part of the IL-15 signaling pathway, and IL-15 is involved in the development of NK cells . In the systemic plus arthritic stage of SJIA, we found that expression of IL-10 in PBMC was positively associated with arthritis. In in vitro studies of SJIA monocytes, we and others observe that IL-10 is expressed after TLR stimulation and that IL-10 signaling is intact [Macaubas et al., unpublished]. Given the immunosuppressive effect of IL-10, the association of this gene with arthritis in SJIA may represent an attempt by the immune system to reduce inflammation. The level of IL-10 may be inadequate to deal with the inflammatory challenge, as the frequency of a promoter allele associated with low IL-10 expression is increased in SJIA patients . We found that LPS-induced production of IL-10 protein in SJIA monocytes is comparable to controls . A striking finding of this study is that deficiency in IL-4-related pathways correlates with JC in the arthritic phase of SJIA. IL-4 has been implicated in protection against arthritis. A polymorphism in the IL4Rα gene that confers reduced responsiveness to IL-4 is associated with worse outcome in RA . Low levels of circulating IL-4 are observed in patients with active POLY . IL-4 has been shown to suppress growth factor-induced proliferation of cultured rheumatoid synovial cells by interfering with the cell cycle and by decreasing cell survival . In the murine model of collagen-induced arthritis, IL-4 is protective against cartilage and bone destruction , and neutralization of IL-4 in the same model results in reversal of arthritis suppression . IL-4 is also protective in the model of proteoglycan-induced arthritis . Interestingly, in proteoglycan-induced arthritis, mice deficient in IL-4Rα showed higher IL-1β, IL-6 and MIP-1α, whereas levels of IFNγ and autoantibodies were less affected. These results imply that IL-4 suppresses innate immune activity more than the adaptive system in this arthritis model . This might model the arthritis of late-stage SJIA. IL-4 inhibits expression of pro-inflammatory cytokines, such as IL-1β, TNFα and IL-17 . As mentioned, IL-17 is an attractive candidate for a driver of inflammation in the later arthritic phase of SJIA. Th17 cells may become IL-1-independent in SJIA, as seen in an animal model . The IL-1β independence of IL-17 action would be consistent with the decreased efficacy of anti-IL-1 therapy in the later arthritic phase of SJIA . The ability of IL-4 to suppress reactivation of committed Th17 cells may be another mechanism by which IL-4 deficiency could contribute to arthritis in SJIA. Finally, in a small, open-label study, oral histone deacetylase inhibitors in patients with a mean SJIA duration of five years showed significant therapeutic benefit, specifically for arthritis .
This finding is consistent with the idea that distinct biology may be involved in later-phase arthritis in SJIA. We found no gene association or correlation linked with POLY joint count, and a limited number of somewhat different genes were associated with elevated ESR in POLY-JIA subjects. Our gene panel was largely derived from an SJIA-based microarray study and, as such, it has a significant bias towards SJIA-related genes. Further, the systemic nature of SJIA predicts more changes in peripheral blood than for POLY, where pathology is more localized. Our POLY cohort was itself heterogeneous, including RF+ and RF- patients, who were analyzed as one group. Most gene expression studies have analyzed RF- patients only [ , , ]; some have not determined the RF status . Griffin et al. (2009) showed that RF+ and RF- patients can share a similar gene signature . It will be of interest to determine the cell types within PBMC that are responsible for particular transcripts. Based on expression patterns correlated with more lineage-specific genes, it is most likely that IL-4 transcripts derive from CD4 T cells; IL-4 message expression correlates with expression of CD40LG and IL2RG (not shown). In contrast, IL-10 expression correlates with expression of IL-1, IL-1-related genes and IL-6 (not shown), suggesting that IL-10 transcripts are expressed in monocytes. Further studies are also needed to determine the specificity of the SJIA gene signature in relation to other acute inflammatory diseases, such as bacterial and viral infections, and other pediatric rheumatologic diseases . Nonetheless, our current results add to the growing evidence that different molecular mechanisms distinguish SJIA from other JIA subtypes [ , , , , ].
Conclusions

This study demonstrates that analysis of individual clinical parameters in a complex disease like SJIA may reveal unique and informative molecular associations. In addition to elucidating disease immunopathology, this approach may help identify therapeutic targets and strategies tailored to the different phases of SJIA.
Abbreviations

AF: arthritis-predominant phase; AOSD: Adult Onset Still Disease; BAFF: B cell activating factor; CPT: cell preparation tubes; CRP: C-reactive protein; EEF1A1: eukaryotic translation elongation factor 1 alpha 1; ESR: erythrocyte sedimentation rate; F: SJIA flare; FDR: false discovery rate; GEO: Gene Expression Omnibus; GC: glucocorticoid; GCR: glucocorticoid receptor; IL: interleukin; JC: joint count; LPS: lipopolysaccharide; MAS: macrophage activation syndrome; PANTHER: Protein ANalysis THrough Evolutionary Relationships Classification System; PBMC: peripheral blood mononuclear cells; PCR: polymerase chain reaction; PKR: protein kinase receptor; PLT: platelets; POLY: polyarticular juvenile idiopathic arthritis; PPP1CC: protein phosphatase 1, catalytic subunit, gamma isoform; Q: SJIA quiescence; qPCR: quantitative PCR; RA: rheumatoid arthritis; RF: rheumatoid factor; RPL12: ribosomal protein L12; RPL41: ribosomal protein L41; SAF: systemic plus arthritic phase; SAM: Significance Analysis of Microarrays; SJIA: systemic onset juvenile idiopathic arthritis; WBC: white blood count.
The authors declare that they have no competing interests.
XBL performed the statistical analysis and wrote the paper. CM performed statistical analysis, interpreted the data and wrote the paper. HCA, S-YPC and ABB designed and performed the kinetic PCR. QW and EC performed statistical analysis. YS and CD processed patients' samples, prepared RNA, performed and analyzed the microarrays. K-HP, RL, C-JL and SHP performed microarray and initial data analysis. TL and CS provided patients' samples and clinical information. SNC helped design the study and the initial strategy for data analysis. EM contributed to study design, interpreted data and wrote the manuscript. All authors read and approved the final manuscript.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1741-7015/10/125/prepub
Additional file 1: Supplementary Figure 1. Unsupervised hierarchical clustering analysis of a subset of differentially expressed genes from SJIA flare and quiescence samples studied by microarray. Paired samples from 14 subjects at flare (red) and quiescence (green) were studied. Independent samples from the same individual are indicated by *. Each column represents a separate sample; each row represents a separate gene.
Additional file 2: Supplementary Table 1. The 181 immune-related genes analyzed in this study.
Additional file 3: Supplementary Table 2. Annotation of the 181 analyzed genes into different functional categories using PANTHER software.
Additional file 4: Supplementary Table 3. SJIA ESR, SJIA JC and POLY ESR related gene lists.
Additional file 5: Supplementary Figure 2. The fold changes in expression of the genes that differ between groups and between quartiles of ESR or JC. (A) Probability density of fold changes of the selected ESR-related genes between F1 + Q and F2 in SJIA; (B) probability density of fold changes of the selected ESR-related genes between quartiles (first to third quartile) in SJIA; (C) probability density of fold changes of the selected JC-related genes between Q and F in SJIA; (D) probability density of fold changes of the selected JC-related genes between quartiles (first to third quartile) in SJIA; (E) probability density of fold changes of the selected ESR-related genes between F2 and F1 + Q in POLY; (F) probability density of fold changes of the selected ESR-related genes between quartiles (first to third quartile) in POLY.
Effects of an Interaction and Cognitive Engagement-Based Blended Teaching on Obstetric and Gynecology Nursing Course

1. Introduction

With the continuous reform and innovation of nursing education in China, cultivating the comprehensive abilities of nursing students has become an inevitable trend, and more and more high-quality classes have been established. The construction of higher education curricula has put forward higher requirements for nursing undergraduate teaching. Blended online and offline high-quality classes have become a mainstream form and the new normal of learning in the information age. However, many colleges and universities are still at the initial stage of blended teaching and many problems remain; designing and implementing blended teaching effectively is a huge challenge for teachers . The purpose of China's Ministry of Education in creating the "golden course" is to make "students move and the classroom lively", involving students deeply in the classroom to cultivate active and in-depth learning habits . However, differences in students' initiative, participation in learning activities and understanding of knowledge in blended teaching make it difficult to achieve the "advanced" goal of the "golden course" . How to effectively connect and organically integrate online and offline teaching forms in the nursing classroom has become an important issue facing nursing curriculum construction. Obstetrics and gynecology (OB-GYN) nursing is a compulsory and core course for nursing students. Its teaching content is abstract, difficult to understand, professional and technical, emphasizing the combination of theoretical knowledge and clinical practice . The teaching environment and teaching plan are the main factors affecting effective teaching of obstetrics and gynecology nursing, and are effective means of achieving teaching objectives and key to improving teaching quality . In the literature, it has been reported that the traditional OB-GYN nursing course focuses on theoretical teaching and operation demonstration training, ignoring the cultivation of comprehensive abilities such as self-directed learning, critical thinking, communication skills and practical application, so that students passively accept knowledge . However, online-only course learning lacks timely teaching interaction, and students have a poor sense of presence, which is not conducive to the practical education of the nursing specialty . Therefore, it is particularly important to explore a more effective obstetrics and gynecology nursing teaching model. In recent years, colleges and universities have actively implemented the blended teaching model to achieve personalized education in a variety of ways. In the field of nursing education, blended teaching can stimulate students' interest in learning, improve learning outcomes and cultivate students' comprehensive abilities . However, other studies have found no statistically significant difference between blended teaching and traditional teaching groups in terms of grades, course satisfaction and independent learning readiness . There are many studies on the application of blended teaching in the field of nursing education, but its design, implementation and evaluation methods are complex and diverse .
The simple combination of face-to-face learning and information technology cannot provide effective teaching and learning solutions, and there is still a lack of clear theoretical framework guidance. Connectivist learning was proposed by Siemens as a means to understand and explore learning in a networked digital age. It explains how learning happens in the era of "Internet +" from a brand-new perspective. According to this theory, knowledge is a network phenomenon, and learning is the establishment of connections and the formation of networks, including neural networks, conceptual networks and external/social networks. The goal of learning is knowledge growth based on creation, that is, knowledge circulation. This theory is the first to face the complexity of learning directly. It regards learning itself as a complex system in which "being" is an integral, distributed response to how elements are connected by the perceiver, and knowledge exists in the connections. Owing to its forward-looking interpretation of future human learning, it has rapidly gained broad attention from the international community and become the forefront of learning theory research . Teaching interaction is the core of connectivist learning and the key to its success. To apply connectivist learning to educational practice, Wang et al. put forward a framework for interaction and cognitive engagement in connectivist learning. According to the depth of cognitive engagement, from shallow to deep, the model divides teaching interaction in connectivist learning into four levels: operation interaction, wayfinding interaction, sensemaking interaction and innovation interaction . Guided by this framework, this study constructed an online and offline blended teaching model for OB-GYN nursing, in order to improve students' autonomous learning and problem-solving skills, promote the formation of critical thinking, and thereby improve students' comprehensive quality. At the same time, the learning experience of nursing undergraduates in blended teaching was examined by evaluating its implementation effect, and feedback was collected to provide a reference for improving the application of blended teaching in nursing education. Hypothesized benefits were increased competency, a higher self-directed learning level, and improved learning outcomes. Given that this is the first study to apply the framework for interaction and cognitive engagement in connectivist learning to a nursing course, it can identify potential benefits for future nursing education studies.

2. Materials and Methods

2.1. Study Design

The study utilized a randomized controlled trial design to examine nursing students' comprehensive abilities after applying the blended teaching based on the framework for interaction and cognitive engagement in connectivist learning. Qualitative data were collected to examine the effects of the program.

2.2. Sample and Setting

This study was conducted between March and June 2021 in the nursing department of Harbin Medical University. OB-GYN nursing is a mandatory class for junior students, and there are six classes of nursing undergraduates. We randomly selected students from two classes and randomly allocated them to the experimental group (n = 64) or the control group (n = 59) by a sealed envelope system with two numbers (1-control; 2-experimental).
2.3. Ethical Statement

All participants signed the informed consent form, and this study was approved by the Institutional Review Board at Harbin Medical University, Daqing. All participants were told that they were free to withdraw from the study at any time and for any reason.

2.4. The Interaction and Cognitive Engagement-Based Blended Teaching Program

2.4.1. Theoretical Framework

This study takes the framework for interaction and cognitive engagement in connectivist learning as its theoretical basis. The teaching model is constructed according to the four levels of the interaction model.
① Operation interaction: interactions among teachers, students and online resources are realized in the online and offline environments of blended teaching, so that students establish connections and form feedback with teachers, classroom activities and online resources.
② Wayfinding interaction: students are provided with information and instructions from teachers and teaching assistants before blended teaching, so that they know how to conduct online self-study through the online resources and understand the teaching links, evaluation methods and rules of the course.
③ Sensemaking interaction: through scientifically organized theoretical and practical teaching, students reflect, summarize, share and make decisions; through teaching activities, students master and apply knowledge so that they can make correct decisions in specific clinical situations.
④ Innovation interaction: on the basis of systematically mastering knowledge, knowledge points are consolidated and integrated by creating and re-synthesizing them, and horizontal and vertical knowledge is closely linked, enabling spiral knowledge consolidation and grid-like knowledge expansion and optimization.

2.4.2. The Blended Teaching Process Design

Teaching Objectives

The teaching objectives of this course focus on cultivating the core competence of the nursing specialty, preparing for the nurse qualification examination, and meeting the needs of the learners, with an emphasis on practicality and innovation in step with the development of the times. The teaching process pays particular attention to the relationship between the vertical systematization of knowledge and its horizontal cross-links so that students can master knowledge flexibly; moreover, it focuses on cultivating students' critical thinking and moving them from simple learning to in-depth thinking, reflecting students' participation in blended teaching.

Teaching Contents

According to the teaching objectives, the new-edition textbooks of Obstetrics and Gynecology, which are widely recognized in Chinese higher education, are used to make full preparation before class. Through collective lesson preparation with clinical obstetrics and gynecology professors and clinical teachers, the key and difficult points of each chapter were optimized and combined. In order to broaden students' horizons, new progress in obstetrics and gynecology and nursing was added to the teaching process. The course is divided into four progressive teaching modules so that students master structured, systematic knowledge; the spiral teaching module division is shown in .

Teaching Mode

We constructed the "3(P) 2(R) 1(C) three stages" teaching mode. 3(P) includes Prepare before class, as well as Present and Produce in class.
Students complete pre-class preparation and group discussion through the online platform (Prepare before class), and give reports and complete discussions in groups according to the design of each class (Present and Produce in class). 2(R) includes Review and Reflect after class: students review and reflect on what they have learned after class. 1(C) refers to Concept: each student should form their own concept map of knowledge to achieve systematic mastery of the theoretical knowledge of obstetrics and gynecology nursing. Students are divided into several groups to finish the pre-class preparation and discussion. The group members have a clear division of tasks. The discussion topics include different disease cases, critical thinking cases, and humanistic issues with ideological and political elements. Each student is responsible for a role such as collecting data, sorting data, making PowerPoint slides, drawing concept maps, or reporting in offline classes, so they are deeply involved in the whole process of learning.

Teaching Organization Form

Through the establishment of teaching classes on the platform, course knowledge videos, teaching PowerPoint slides, quizzes, check-in, online discussion, and the release and submission of homework are connected before, during and after class, with timely feedback on students' participation at each stage. The details of the platform are shown in .

2.5. Control Group

To avoid bias, the theoretical courses and clinical practice were administered to both groups by the same instructors. The students in this group received the usual teaching mode to complete the course tasks.

2.6. Instruments

2.6.1. Final Course Exam

The examination paper was set by an instructor independent of this study. The total score was 100 points, comprising objective items (single-choice and multiple-choice questions) and subjective items (short-answer questions and a medical-record analysis item), with a test time of 90 min. The two groups of students received the same examination paper, examination time and marking teacher.

2.6.2. Competency Inventory for Nursing Students (CINS)

The Chinese version of the CINS scale was used to evaluate students' professional core ability. The scale has 38 items in 6 dimensions: basic biomedical science (5 items), general clinical skills (6 items), critical thinking and reasoning (3 items), caring (5 items), ethics and accountability (14 items), and lifelong learning (5 items). The total score ranges from 38 to 266, with higher scores indicating stronger core abilities. This scale has been widely used among nursing students in China and has good reliability and validity . The Cronbach's alpha value in this study was 0.80.

2.6.3. Self-Directed Learning Instrument for Nursing Students (SDLINS)

The Chinese SDLINS scale was used to compare differences in self-directed learning ability between the two groups. The scale has 60 items in five dimensions (awareness, learning strategies, learning activities, evaluation, interpersonal skills). The score range is 60-300 points, with higher scores indicating a higher level of self-directed learning ability. This scale has been widely used among nursing students in China and has good reliability and validity . The Cronbach's alpha value in this study was 0.81.
2.7. Data Collection

Before and after the course, questionnaires were sent as links to the WeChat group of each class through the WenJuanXing platform, to collect data on the self-directed learning ability and core competency of the two groups of students. Students were assured that their questionnaires would be anonymous and would not affect their grades.

2.8. Data Analysis

SPSS 24.0 (IBM, Armonk, NY, USA) statistical software was used for data analysis. Descriptive categorical data were analyzed using numbers, percentages, medians and minimum-maximum values, while continuous data were analyzed using arithmetic means and standard deviations. Parametric tests were used for normally distributed data; the paired t-test was used to compare score differences between the two groups. p < 0.05 was considered statistically significant.
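The scoring arithmetic and group comparison can be illustrated with a short sketch. The study's analysis was done in SPSS, so the R code below is only a hedged illustration on fabricated data: the item scores, the 1-7 response format (inferred from the stated 38-266 range of the CINS), and the two-sample reading of the group comparison are all assumptions, and the paper itself reports paired t-tests.

## Illustrative sketch only (the study used SPSS 24.0); all data fabricated.
set.seed(4)
n_exp <- 64; n_ctrl <- 59
items <- matrix(sample(1:7, (n_exp + n_ctrl) * 38, replace = TRUE),
                nrow = n_exp + n_ctrl)          # 38 CINS items, assumed 1-7
group <- factor(c(rep("experimental", n_exp), rep("control", n_ctrl)))

total <- rowSums(items)                          # CINS total, range 38-266
tapply(total, group, function(x) c(mean = mean(x), sd = sd(x)))

## Two-sample comparison of post-course totals between groups
t.test(total ~ group)                            # significance at p < 0.05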
The Blended Teaching Process Design Teaching Objectives The teaching objectives of this course focus on connecting with the cultivation of the core competence of nursing specialty, preparing for the nurse qualification examination and the needs of the learners. There is an emphasis on practicality and innovation with the development of the times. The teaching process pay more attention to the relationship between the vertical systematization of knowledge and the horizontal crosswise of knowledge so that students can master knowledge flexibly, moreover, it focuses on the cultivation of students’ critical thinking, promote students’ simple learning to in-depth thinking so that to reflect students’ participation in the blended teaching. Teaching Contents According to the teaching objectives, the new editions textbooks of Obstetrics and Gynecology, which are widely recognized in China Higher Education are used to make full preparation before class. Through collective lesson preparation with clinical obstetrics and gynecology professors and clinical teachers, the key and difficult points of each chapter were optimized and combined. In order to broaden students’ horizons, the new progress of obstetrics and gynecology and nursing was added in the teaching process. This course is divided into four progressive teaching modules to let students master the structured system knowledge, and the spiral teaching module division is shown in . Teaching Mode Construct the “3(P) 2(R) 1(C) three stages” teaching mode. 3(P) includes P repare before class, as well as P resent and P roduce in class. For students, they need to complete the pre-class preparation and group discussion through the online platform (Prepare before class) and give report and complete discussion in groups according to the design of each class ( P resent and P roduce in class). 2(R) includes R eview and R eflect after class. Students need to review and reflect what they have learned after class. 1(C) refers to Concept. Each student should form their own concept map of knowledge to achieve systematic mastery of theoretical knowledge of obstetrics and gynecology nursing. Students are divided into several groups to finish the pre-class preparation and discussion. The group members have a clear division of task. The discussion topics include different diseases cases, critical thinking cases, and humanistic issues with ideological and political elements. Each student is responsible for their roles such as collecting data, sorting out data, making Power Point, drawing concept maps, and reporting offline classes, so they are deeply involved in the whole process of learning knowledge. Teaching Organization Form Through the establishment of teaching classes on the platform, the course knowledge videos, teaching Power Point, quizzes, check-in, online discussion, release and submission of homework can be connected before, during and after class and timely feedback on the relevant data of students’ participation level at each stage. The details of the platform were shown in . This study takes the framework for interaction and cognitive engagement in connectivist learning as the theoretical basis. The teaching model is constructed according to the four levels of interaction model. 
① Operation interaction: the interaction between teachers, students and online resources can be realized in the online and offline teaching environment of blended teaching, so that students can establish contact and form feedback with teachers, classroom activities and online resources. ② Wayfinding interaction: Provide students with all kinds of information and instructions from teachers and teaching assistants before completing blended teaching, so that students can know how to conduct online self-study through online resources, and can clarify the teaching links, evaluation methods and rules of this course. ③ Sensemaking interaction: Through scientific organizational theory and practical teaching, students can reflect, summarize, share and make decisions. Through teaching activities, students can master and apply knowledge, so that they can make correct decisions in specific clinical situations. ④ Innovation interaction: On the basis of systematically mastering knowledge, knowledge points can be consolidated and integrated by creating and resynthesizing knowledge points, and horizontal and vertical knowledge can be closely linked to achieve the purpose of systematically mastering knowledge, and spiral knowledge consolidation and grid knowledge expansion and optimization can be carried out. Teaching Objectives The teaching objectives of this course focus on connecting with the cultivation of the core competence of nursing specialty, preparing for the nurse qualification examination and the needs of the learners. There is an emphasis on practicality and innovation with the development of the times. The teaching process pay more attention to the relationship between the vertical systematization of knowledge and the horizontal crosswise of knowledge so that students can master knowledge flexibly, moreover, it focuses on the cultivation of students’ critical thinking, promote students’ simple learning to in-depth thinking so that to reflect students’ participation in the blended teaching. Teaching Contents According to the teaching objectives, the new editions textbooks of Obstetrics and Gynecology, which are widely recognized in China Higher Education are used to make full preparation before class. Through collective lesson preparation with clinical obstetrics and gynecology professors and clinical teachers, the key and difficult points of each chapter were optimized and combined. In order to broaden students’ horizons, the new progress of obstetrics and gynecology and nursing was added in the teaching process. This course is divided into four progressive teaching modules to let students master the structured system knowledge, and the spiral teaching module division is shown in . Teaching Mode Construct the “3(P) 2(R) 1(C) three stages” teaching mode. 3(P) includes P repare before class, as well as P resent and P roduce in class. For students, they need to complete the pre-class preparation and group discussion through the online platform (Prepare before class) and give report and complete discussion in groups according to the design of each class ( P resent and P roduce in class). 2(R) includes R eview and R eflect after class. Students need to review and reflect what they have learned after class. 1(C) refers to Concept. Each student should form their own concept map of knowledge to achieve systematic mastery of theoretical knowledge of obstetrics and gynecology nursing. Students are divided into several groups to finish the pre-class preparation and discussion. 
To avoid bias, the theoretical courses and clinical practice were administered to both groups by the same instructors. Students in the control group received the usual teaching mode to complete the course tasks. 2.6.1. Final Course Exam The examination paper was set by an instructor independent of this study. The total score was 100 points, comprising objective items (single-choice and multiple-choice questions) and subjective items (short-answer questions and a medical-record analysis question), with a test time of 90 min. The two groups of students received the same examination paper, examination time, and marking teacher. 2.6.2. Competency Inventory for Nursing Students, CINS The Chinese version of the CINS scale was used to evaluate students' professional core competence. The scale has 38 items in 6 dimensions: basic biomedical science (5 items), general clinical skills (6 items), critical thinking and reasoning (3 items), caring (5 items), ethics and accountability (14 items), and lifelong learning (5 items). The total score ranges from 38 to 266, with higher scores indicating stronger core abilities. The scale has been widely used among nursing students in China and has good reliability and validity. The Cronbach's alpha value in this study was 0.80. 2.6.3. Self-Directed Learning Instrument for Nursing Students, SDLINS The Chinese SDLINS scale was used to compare self-directed learning ability between the two groups.
The scale has 60 items in five dimensions (Awareness, Learning strategies, Learning activities, Evaluation, Interpersonal skills). The score range is 60–300 points, with higher scores indicating a higher level of self-directed learning ability. The scale has been widely used among nursing students in China and has good reliability and validity. The Cronbach's alpha value in this study was 0.81. Before and after the course, questionnaires were sent to the class WeChat group as links through the WenJuanXing platform to collect data on the self-directed learning ability and core competence of the two groups of students. Students were assured that their questionnaires would be anonymous and would not affect their grades. SPSS 24.0 (IBM, Armonk, NY, USA) statistical software was used for data analysis. Descriptive categorical data were summarized using numbers, percentages, medians, and minimum–maximum values, while continuous data were summarized using arithmetic means and standard deviations. Parametric tests were used for normally distributed data, and a paired t test was used to compare score differences between the two groups; p < 0.05 was considered statistically significant. 3.1. Comparison of Test Scores between Two Groups of Students The theoretical test scores were 75.43 (5.22) in the control group and 77.25 (4.53) in the intervention group. A paired t test was used to compare the scores of the two groups, and the difference was statistically significant ( t = 2.57; p < 0.05). 3.2. Comparison of the CINS Scale Scores between Two Groups There was no significant difference in CINS scores between the two groups before the intervention ( p > 0.05). After the course, the overall CINS score and the scores of all dimensions in the intervention group were higher than those in the control group, with statistically significant differences ( p < 0.05), as shown in . 3.3. Comparison of the SDLINS Scale Scores between Two Groups There was no significant difference in self-directed learning ability scores between the two groups before the intervention ( p > 0.05). After the course, the total score and the scores of all dimensions of self-directed learning ability in the intervention group were higher than those in the control group, with statistically significant differences ( p < 0.05), as shown in .
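As a minimal illustration of the score comparison described above (the authors analyzed their data in SPSS; this Python sketch uses simulated scores because the raw data are not available, and the group size of 50 is an assumption):

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(75.43, 5.22, 50)        # hypothetical control-group exam scores
intervention = rng.normal(77.25, 4.53, 50)   # hypothetical intervention-group exam scores

# The paper reports a paired t test, which requires equal-length, matched samples.
t_paired, p_paired = stats.ttest_rel(intervention, control)
# An independent-samples t test is the more common choice for two separate classes.
t_ind, p_ind = stats.ttest_ind(intervention, control)
print(f"paired: t={t_paired:.2f}, p={p_paired:.4f}")
print(f"independent: t={t_ind:.2f}, p={p_ind:.4f}")

Which of the two tests is appropriate depends on whether the groups were actually matched one-to-one; the sketch simply mirrors the reported analysis alongside the conventional alternative.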
The findings from this study show that the blended teaching method based on interaction and cognitive engagement gave nursing students a constructive opportunity to perform better in OB-GYN learning and to strengthen their general clinical skills, critical thinking and reasoning, caring ability, lifelong learning, and self-directed learning. With increasingly mature network information technology widely applied in education, the blended teaching mode based on the flipped classroom has been widely used in nursing education. The core of the flipped classroom is to move much of the direct instruction out of the classroom, thus freeing precious classroom time for meaningful deep learning. The key to implementing the flipped classroom is not the pursuit of a stereotyped label but the effective application of the teaching mode according to different teaching situations. Online teaching is an important part of blended teaching, and connectivist learning theory is a new theoretical foundation that allows online education to adapt to the development of the times. In the era of "Internet+", the shift in educational concept from "knowledge construction" to "knowledge production" reflects the transformation from passive to active treatment of knowledge and from constructivism to connectivism. Connectivism understands learning from a broader perspective and cultivates wide-ranging learning ability and adaptability in learners within a dynamic and chaotic environment. Blended teaching is beneficial for cultivating learners' learning ability, and in pursuing the goal of a "golden course", students need good cognitive engagement. The connectivist interaction model based on cognitive engagement describes the interaction rules and characteristics of learners in a connectivist learning environment from the perspective of teaching interaction and reveals the connectivist learning process. Ordered by cognitive engagement from surface to deep, the model divides the teaching interaction of connectivist learning into four levels: "operational interaction", "wayfinding interaction", "sensemaking interaction" and "innovation interaction". Lower-level interactions are the basis for higher-level interactions, which extend the requirements of the lower levels. The four-level interaction is a networked, nonlinear process and shows strong recursion. Based on the framework for interaction and cognitive engagement in connectivist learning, this course constructed a three-in-one, three-stage blended teaching model. Using the online and offline learning environments, it provided various opportunities to participate in course activities, achieving the aim of "making students active and the class lively". The teaching mode enabled students to carry out in-depth learning and consolidation of course knowledge in three stages. The intervention group's theory scores were higher than both the control group's and those of students in previous years, indicating that the blended teaching mode played an important role in consolidating knowledge points. The teaching content was divided into modules, and the links and spiral consolidation among the four modules were emphasized in the details of the knowledge points, so that students could achieve clinical application and systematic mastery on the basis of memorization, comprehension, evaluation, and analysis.
This is also consistent with the essence of connectivist learning, which is a spiral process of knowledge innovation and network expansion and optimization under the four types of interaction. The results showed that the core competence and self-directed learning ability of the intervention group were significantly higher than those of the control group ( p < 0.05). A number of investigations have shown that, relative to their theoretical knowledge, nursing students lack practical experience and show weak critical clinical thinking and autonomous learning ability. The blended teaching carried out in this study therefore breaks through the traditional "subject-centered" education mode and integrates knowledge teaching, ability cultivation, and quality improvement, fully reflecting the organic combination of theory and practice and of knowledge teaching and ability cultivation. By combining "knowledge already learned" and "knowledge that should be learned" with "how to learn knowledge", students were constantly prompted to find blind spots in their own cognition and to tap their own potential, thereby stimulating improvement in their professional core competence and independent learning ability. This study follows the principle of "students as the main body, teachers as the guide". The online course was built by the teaching team before class, and real clinical hot topics were added to the online materials and discussion cases as web links to keep the course content innovative and current. This mode encouraged students to actively explore hot and controversial issues to improve their abilities in independent analysis, thinking, practice, questioning, and creation. By assigning pre-class tasks, students took the initiative to explore and cultivated their ability to collect and process information and to analyze and solve problems. Based on the preview feedback from students, teachers designed the in-class content and applied a variety of teaching methods to inspire students' logical and critical thinking; for example, in some teaching chapters an industry-university-research collaborative project was integrated, realizing the transformation from theoretical knowledge to practical application. Taking the chapter on perineal care as an example, after finishing the class students were invited to a perineal care company to guide mothers, and during the nursing process some students identified maternal needs and pursued related research. This effectively improved students' ability to apply knowledge and their communication skills, and achieved an effective combination of theory and practice. The blended teaching included online discussion, project learning, peer mutual assistance, case discussion and reporting, and other activities. Online discussion and autonomous learning were mentioned most frequently in the student interviews. Contemporary college students are more inclined toward meaningful learning; raising questions to trigger thinking and discussion and completing project-based learning promote cooperative learning, which is conducive to improving students' independent learning ability.
Online self-study in blended teaching tests students' self-management and active learning ability, which directly affects their involvement and participation in online learning; the higher the involvement rate, the better the learning effect. Therefore, to achieve a good teaching effect in blended teaching, the quality of online teaching resources should be improved so as to strengthen students' pre-class preparation and independent learning ability and to foster deep participation in and after class. As a new teaching mode, many aspects still need improvement, including the selection of learning content, the diversification of learning formats, the adjustment of course difficulty, and the supervision of learning outcomes. In this study, some students reported that their learning enthusiasm declined as the course progressed, which may be related to the length of the course and a decrease in students' interest. Teachers therefore need to monitor changes in students' learning enthusiasm at all times and take measures to sustain it. Critz and Knight proposed that students' problems should be detected and addressed early and that teaching methods should be improved in time to maintain students' enthusiasm for learning. Schlairet proposed that teachers should find appropriate ways to manage students' learning expectations and learning process. At the same time, because students' learning autonomy increases in blended teaching, how to carry out process assessment that accurately reflects students' learning outcomes and how to supervise students' learning process are questions that future studies need to consider. Although this study used an RCT design, it has some limitations. First, the study was conducted with nursing students at one university, so the generalizability of the results should be interpreted with caution. Second, only effects on nursing students' competence and self-directed learning ability were examined; future studies might consider analyzing effects on other outcomes. In this study, the blended teaching mode based on the framework for interaction and cognitive engagement in connectivist learning was applied to an obstetrics and gynecology nursing course, achieved a good teaching effect, effectively improved the core competence of nursing students, and cultivated their self-directed learning ability. These preliminary results show that the teaching mode is recognized by students and that cognitive participation and teaching satisfaction are high; in future studies, this suitable, efficient, and modern teaching mode can be applied to other nursing courses to strengthen discipline integration and improve the comprehensive ability of nursing students. |
Family Medicine Patients Have Shorter Length of Stay When Cared for on a Family Medicine Inpatient Service | f04a9938-25b9-4154-80a2-c79cc9332a30 | 6487748 | Family Medicine[mh] | Recently, the hospitalist profession marked its 20th anniversary. More than 50 000 physicians in the United States identify as hospitalists, making it the largest subspecialty in internal medicine. Hospitalists now practice in 75% of hospitals across the United States. The tremendous growth of the specialty was driven by both economic factors and a large pool of internists primarily trained in hospital settings. Traditional hospital care by a generalist consists of rounding on hospitalized patients once or twice daily while maintaining an active outpatient practice. Drawbacks of this model include reduced outpatient efficiency due to constant hospital interruption and reduced hospital efficiency because acute changes cannot be acted upon in a timely fashion. Hospitalist care involves a physician spending the vast majority of their time in the hospital caring for inpatients. Hospitalists have been shown to have shorter length of stay (LOS) and cost savings when compared with traditional hospital care while preserving patient satisfaction and quality of care. However, the Achilles heel of hospitalist medicine is discontinuity. Despite current delivery and payment systems favoring the hospitalist style of inpatient care, long-term relationships between patients and their primary care teams continue to be as relevant inside today's hospitals as they were in 1948 when W. Eugene Smith published his landmark photo essay titled "Country Doctor." Continuity of care has been associated with less hospitalization, fewer readmissions, and lower costs. A recent study of hospitalized Medicare patients showed that even though LOS was shorter for hospitalists, those cared for by their primary care physician had lower mortality and were more likely to be discharged home. Although Academic Medical Centers (AMCs) make up just 6% of hospitals, they are important to study because they account for more than 20% of hospital care. AMCs commonly care for underserved local patients in addition to providing tertiary care not available elsewhere. At many AMCs, hospitalists practice independently on nonteaching services and also act as preceptors on teaching services. The meta-analysis by Rachoin et al found significant heterogeneity, suggesting that different hospitalist environments may have very different outcomes. One study found that the academic-preceptor model had shorter LOS than hospitalists. Another study found that a family medicine teaching service had shorter LOS and lower costs than hospitalists. Family physicians have the benefit of continuity of care and of knowledge about available outpatient services when caring for their patients in the hospital. Local community patients at AMCs have very different medical needs when compared with tertiary care patients. Because of these differences, it is important to determine the most efficient ways to care for local community inpatient needs at AMCs. We hypothesized that local family medicine patients cared for by a family medicine inpatient service had shorter LOS, after adjusting for illness severity and other factors, than those cared for on other general medical services at an AMC, which are often staffed by hospitalists familiar with tertiary patient care.
To test the hypothesis, a retrospective cohort study of Department of Family Medicine (DFM) patients was conducted comparing LOS between those admitted to the Family Medicine Inpatient (FMI) service and those admitted to other general medical inpatient services. Setting The DFM provides primary care for more than 80 000 community patients at 4 clinical sites and a single skilled care nursing facility. The FMI service admits any DFM patient who requires general inpatient medical care and is not excluded by specific criteria (age <16 years, requiring cardiac monitoring/telemetry). While most admissions to the FMI service come through the emergency department, the FMI service also accepts direct admissions from clinic and transfers from the intensive care unit or other hospital services. Additionally, DFM patients with specific diagnoses are occasionally admitted to gastroenterology or pulmonary medicine subspecialty services whenever they are below their maximum capacity. The FMI service is a teaching service at an academic medical center and is staffed by a family medicine board–certified attending physician who rotates on service for 1 week at a time, a senior family medicine resident (PGY3), a junior family medicine resident (PGY2 or PGY3) taking 24-hour call every third day, and 1 to 2 family medicine interns (PGY1) working a day or night shift. While the FMI service has no maximum census limit, backup processes exist if the morning census exceeds 12. Patients outside the DFM are cared for by Hospital Internal Medicine (HIM) when they require general inpatient medical care. The 12 HIM services at the study institution vary in their primary admission criteria and structure. Four services are resident teaching services staffed by a hospitalist or general internist and residents, 1 is a fellowship service staffed by hospital medicine fellows, 2 are medical services for patients with active hematologic or solid organ malignancies, and the remainder are traditional hospitalist services staffed by a hospitalist and a nurse practitioner or physician assistant. One of the hospitalist services accepts patients requiring telemetry for noncardiac reasons. With the exception of the hematology and oncology services, the HIM services admit all patients requiring general inpatient medical care, including local internal medicine primary care patients, regional patients requiring a higher level of hospital care than available locally, and tertiary referral patients. All the HIM services have maximum census limits and backup procedures exist, including the temporary creation of additional hospitalist services should demand exceed capacity. Occasionally, patients with a DFM primary care provider are inadvertently admitted to an HIM service. This likely happens because primary care clinics have blended teams and cross-departmental scheduling. For example, a DFM patient may be seen for an acute issue by an internal medicine physician and then mistakenly assigned to a HIM service. Additionally, an emergency department physician may erroneously admit DFM patients to a HIM service. Cohort A dataset of all hospitalizations of adult primary care patients at our institution during 2011-2013 was used for this study. Only patients giving consent for retrospective chart review research were included. All general medical patients empaneled to a DFM primary physician at any of 4 clinical sites located in and around Rochester, Minnesota who were discharged from either the FMI service or a HIM service were identified. 
Of note, surgical patients and patients admitted to subspecialty services such as cardiology, gastroenterology, and pulmonology were not included as they are not considered general medical patients. Postpartum patients admitted to the FMI service (n = 25) were the only exclusion. Study Design Thus, the final retrospective cohort contained all DFM patients admitted to either the FMI service or various HIM services during 2011-2013. Data regarding demographics, admission and discharge services, dates of hospitalizations and emergency department (ED) visits, LOS, dismissal diagnosis, and the Charlson Comorbidity Index were obtained from the electronic health record. The Charlson Comorbidity Index provides a validated method of predicting mortality by weighting various comorbidities such as heart disease, lung disease, diabetes, chronic kidney disease, and malignancy. It represents a measure of patient complexity useful in case-mix adjustment and has also been associated with hospital readmissions and cost of care. The study was reviewed and approved by the Mayo Clinic Institutional Review Board. Analysis Two groups of DFM patients were compared: those dismissed from the FMI service and those dismissed from an HIM service. The main dependent variable was total LOS at the study institution. A secondary outcome of any hospital readmission within thirty days of discharge was also examined, matching the Centers for Medicare and Medicaid Services (CMS) metric for hospital readmission. Although methods to adjust for potentially avoidable readmissions exist, we chose to consider any readmission to eliminate subjectivity. Independent variables included age, gender, marital status, dismissal diagnosis, Charlson Comorbidity Index, number of prior hospitalizations, and number of prior ED visits. The International Classification of Diseases, Ninth Revision (ICD-9) code for the final primary dismissal diagnosis was mapped into 1 of 18 first-level categories using the Clinical Classification Software (CCS) available from the Agency for Healthcare Research and Quality (AHRQ). The 4 most common major categories, diseases of the circulatory system (CV), diseases of the digestive system (GI), infectious and parasitic diseases (ID), and diseases of the respiratory system (Pulm), were retained and the remaining categories were collapsed into 1 category (Other). All data were abstracted electronically and analyzed using R version 3.02 ( http://www.r-project.org/ ). Group statistics for the various factors and the dependent variable LOS were summarized using frequencies, mean, and standard deviation. Differences were compared using a Wilcoxon rank sum test for numeric data, a Fisher exact test for 2 × 2 categorical data, or a chi-square test for n × 2 categorical data, with P values less than .05 considered significant. As is typical for data that cannot be negative, we assumed LOS would be highly right skewed, thus necessitating a logarithm transform to yield an approximately normal distribution that preserves the positive-only character of the data for further analysis. Multivariate analysis using linear regression was carried out on the logarithm-transformed LOS variable to adjust for known risk factors, with P values less than .05 considered significant. Percent change and 95% confidence intervals were calculated for each regression coefficient.
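The authors performed this analysis in R; the following Python sketch (simulated data, illustrative variable names that are not the study's actual fields) shows the same log-transform-and-regress pattern and how a coefficient converts to a percent change in LOS:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated records; distributions are invented solely to make the sketch runnable.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "los": rng.lognormal(mean=1.0, sigma=0.7, size=n),   # right-skewed, positive-only
    "age_decile": rng.integers(3, 10, n),
    "male": rng.integers(0, 2, n),
    "charlson": rng.poisson(3.0, n),
    "him_service": rng.integers(0, 2, n),
})

# Regress log(LOS) on the covariates; because the outcome is log transformed,
# percent change in LOS per 1-unit change in a covariate = (exp(beta) - 1) * 100.
fit = smf.ols("np.log(los) ~ age_decile + male + charlson + him_service", data=df).fit()
print(((np.exp(fit.params) - 1) * 100).round(1))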
There were 3100 admissions from 2117 unique patients during the study period. The majority of hospitalizations (2626) were dismissed from the FMI service. As expected, the LOS was highly right skewed (see ). A logarithm transform applied to the LOS data yielded an approximately normal distribution for further analysis. As shown in , age, gender, and marital status were not different between the groups. Patients dismissed from a HIM service had a higher Charlson Comorbidity Index (median 3 vs 5, Z = −7.55, P < .001), different distribution of final dismissal diagnoses (χ2 = 29.2, df = 4, P < .001), slightly more hospitalizations in the previous 12 months (Z = −2.76, P = .006), and were more likely to have been admitted by a different service (19.2% vs 11.9%, P < .001). However, those dismissed from a HIM service had fewer emergency department visits in the previous 6 months (Z = 2.41, P = .016). Thirty-day readmission rates between FMI and HIM dismissed patients were similar. Median LOS was 0.9 days shorter for those dismissed from the FMI service (median 1.8 vs 2.7, Z = −10.04, P < .01). A multivariate linear regression model for the transformed dependent variable log(LOS) was computed, R2 = 0.24, F(14, 2931) = 64.4, P < .01. Because the dependent variable is log transformed, in we report percent change in LOS for a 1-unit change in the independent variable by exponentiating the coefficient, subtracting 1, and expressing the result as a percentage. Age decile (β = 0.03, t = 3.85, P < .01), male gender (β = 0.08, t = 3.09, P < .01), disposition to a location other than home (β = 0.39, t = 7.20, P < .01), Charlson Comorbidity Index (β = 0.03, t = 6.27, P < .01), a final dismissal diagnosis of ID (β = 0.37, t = 7.69, P < .01), previous hospitalizations (β = 0.05, t = 4.24, P < .01), and admission by a different service (β = 0.43, t = 10.5, P < .01) are all associated with longer LOS. Prior ED visits (β = −0.02, t = −4.86, P < .01) were associated with slightly shorter LOS. Dismissal from an HIM service was associated with a 33.1% (95% CI: 23.5%-43.5%) longer LOS after controlling for the covariates.
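Since the model is fit on log(LOS), each reported β maps to a percent change in LOS via (exp(β) − 1) × 100. As a quick check of the figures above (a throwaway Python snippet using the coefficients reported in the text):

import math

betas = {"age decile": 0.03, "male gender": 0.08, "non-home disposition": 0.39,
         "Charlson index": 0.03, "ID diagnosis": 0.37, "prior hospitalization": 0.05,
         "different admitting service": 0.43, "prior ED visit": -0.02}
for name, b in betas.items():
    print(f"{name}: {(math.exp(b) - 1) * 100:+.1f}%")
# For example, beta = 0.43 corresponds to roughly +54% LOS, and the reported
# 33.1% longer LOS for HIM dismissal implies beta = ln(1.331), about 0.29.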
Although family medicine patients dismissed by a HIM service have a higher Charlson Comorbidity Index, more dismissals to places other than home, and more prior hospitalizations, LOS remains 33% longer even after controlling for these variables. This runs contrary to some studies comparing the hospitalist model to traditional practice that show hospitalists decrease costs and LOS. However, one study of a teaching family medicine service structured similarly to the FMI service studied here did show shorter LOS compared with the hospitalist model. This may be because in the traditional model the physician caring for the hospitalized patient often has other significant outpatient duties during the workday, whereas in our model they have strictly inpatient duties. Increased awareness of outpatient resources is one possible reason for shorter LOS on the FMI service. The FMI service is covered by staff physicians and residents who rotate on service for a defined period but spend the rest of their time engaged in outpatient practice. Therefore, they may be more aware of outpatient resources and better equipped to facilitate an early transition to outpatient care than HIM clinicians. In their outpatient practice, they work closely with the same outpatient nurses and pharmacists who lead the care management and anticoagulation management programs that help patients transition from inpatient to outpatient care. Additionally, the FMI service has a dedicated team of inpatient pharmacists, social workers, and nurses who also have extensive knowledge about community and outpatient resources. This facilitates discharge planning, which has been shown to shorten hospital stays and reduce readmissions. A large study of Medicare patients showed primary care physicians were more likely to discharge patients home and had lower posthospitalization mortality; these benefits were ascribed to increased continuity of care. This aligns with our observation that the FMI service was more likely to discharge patients home, perhaps because increased continuity results in more knowledge about the patient's sociodemographic condition and support network. Thirty-day readmission rates did not differ between FMI and HIM services, suggesting that, similar to other studies, shorter LOS did not increase readmissions. In fact, longer hospital stays have been associated with higher readmission rates, likely due to confounding with illness severity. While we did not measure outpatient continuity of care in this study, it has been connected to fewer readmissions. Limitations The Charlson Comorbidity Index was higher for patients admitted to HIM services. Hypertension, depression, and skin ulcers/cellulitis are not included in the index but have been found to contribute to the cost of care. Additionally, patients cared for on HIM services had a different admitting service more frequently than FMI patients. This often occurs when unstable patients are initially admitted to the intensive care unit and then transferred to the floor prior to dismissal. Thus, it may reflect a higher acuity level among patients cared for by HIM. Despite controlling for these factors, they may incompletely reflect the patient's illness severity. Two of the HIM services specialize in the care of patients with active malignancies. DFM patients are sometimes admitted to these services if they require inpatient chemotherapy. These 2 services have longer LOS than other HIM services.
While the Charlson Comorbidity Index adjusts for the complexity of these patients, we also performed a sub-analysis that excluded the 69 DFM patients admitted to these services. There was no significant change in the multivariate LOS difference, perhaps because the FMI service also cares for many DFM patients with complications of active malignancy. Patients admitted to subspecialty gastroenterology and pulmonology services were excluded from the study. While DFM patients presenting with gastroenterology or pulmonary complaints are not excluded from admission to these services, they are more commonly admitted to the FMI service. demonstrates this with the higher proportion of GI and Pulm final primary diagnoses for the FMI service. These diagnoses were not associated with LOS in the multivariate analysis. Patients requiring telemetry for cardiac diagnoses are admitted or transferred to cardiology and were not included in this study. However, it is notable that there was a slightly higher proportion of CV diagnoses in the HIM group. The reasons for this are unclear, but HIM may hold on to some cardiac patients that FMI transfers to cardiology. Additionally, one HIM service cares for patients requiring telemetry for noncardiac reasons. Because these patients have a higher level of acuity, they may have longer LOS. The number of patients affected is likely very small (<50), but due to the data recorded we were not able to identify them. We are unsure whether the Charlson Comorbidity Index adequately adjusts for these factors. Our study was conducted at a single academic medical center, and the FMI service has a teaching structure. Thus, our results may not generalize to other environments. Additionally, a small number of HIM physicians rotate on their teaching services and have outpatient practices very similar to the FMI service. Given the information recorded, we were unable to discern when one of these physicians was primarily responsible for a patient. However, we would expect that this dilution of the HIM hospitalist service model would actually understate the measured differences. We did not evaluate factors such as nursing ratios or hospitalist workload that have been associated with LOS changes. However, the hospital infrastructure, daily service census, and nursing unit staffing are very similar between HIM units and the FMI unit. Further study regarding the actual difference in knowledge between family physicians and hospitalists regarding outpatient resources available to assist patients with the transition from hospital to home is warranted. However, because such knowledge is highly localized, findings at our institution may not generalize.
Local primary care patients at the AMC were safely discharged sooner from the FMI service than from HIM services after controlling for covariates. Readmission rates were not different. Continuity of care, more intimate knowledge of the outpatient resources available to assist with transitions of care, and potential additional unadjusted complexity of patients on HIM services likely contribute to the shorter LOS for FMI patients.
|
Clinical validation of p16/Ki‐67 dual‐stained cytology triage of | 1e2528c0-45c7-468e-8df7-fb5b2e6b04c6 | 9293341 | Anatomy[mh] | INTRODUCTION Molecular testing for human papillomavirus (HPV) is now widely accepted as the preferred approach for cervical cancer screening. A number of countries including Australia, United Kingdom, the Netherlands, Sweden, Denmark and Turkey have phased out Pap cytology (cytology) as the primary cervical cancer screening test and replaced it with primary HPV testing. The biggest challenge to implementing primary HPV screening is managing the large number of women found to have transient HPV infections. In large U.S. cervical cancer screening trials, approximately 14% of women 25 years and older are HPV positive. Efficient triage methods are needed to determine which HPV‐positive women are at increased risk of high‐grade cervical cancer precursors or cancer and require colposcopy as opposed to those who need follow‐up with repeat testing or routine screening. Cervical cytology has been used to triage HPV‐positive women but because of its low sensitivity for high‐grade precursors, cytology‐negative women need to be retested at a short interval. HPV16/18 genotyping is also used in some settings for triage due to the elevated risk of high‐grade precursors and invasive cancers associated with these genotypes. Triage with HPV16/18 genotyping alone also has limited sensitivity since only approximately 50% of high‐grade cervical cancer precursors are associated with these genotypes. To address this limitation, HPV16/18 genotyping has been combined with cytological triage of women with the 12 "other" HPV genotypes. However, the limited sensitivity of cytology means that a relevant proportion of women with the 12 "other" genotypes with a negative cytology may have precancer. Testing for the presence of cervical cells showing simultaneous expression of both the cell‐cycle regulator protein p16 and the proliferation‐associated Ki‐67 protein (p16/Ki‐67 dual‐stained cytology, ie, DS) has been shown in multiple studies to provide good specificity while maintaining high sensitivity when used as a triage test for abnormal cytology or positive HPV screening test results. This manuscript provides the results from the IMproved Primary screening And Colposcopy Triage (IMPACT) trial for the clinical performance of DS for the triage of HPV‐positive women in a large primary HPV screening population in the United States. The clinical performance of DS is compared to triage using HPV16/18 genotyping combined with cervical cytology or cytology alone.
MATERIALS AND METHODS 2.1 Patient enrolment Women aged 25 to 65 years attending routine cervical cancer screening visits at 32 clinical sites offering cervical cancer screening services, including Planned Parenthood clinics, in 16 states across the United States between September 2017 and November 2018 were invited to join the IMPACT trial, as previously described in detail. Subjects willing and able to provide written informed consent were eligible unless they were pregnant, had a known history of ablative or excisional cervical therapy within the past 12 months, a known history of hysterectomy, or current or planned participation in another cervical cancer screening, treatment or vaccination study. Women were referred to colposcopy and biopsy/endocervical curettage within 12 weeks after enrolment if test results showed abnormal cytology (ie, ASC‐US or worse), a positive HPV test result or combined unsatisfactory cytology and HPV‐negative test results. All study‐related costs including costs for cytology and HPV testing, costs for colposcopy visits and biopsy evaluations, as well as costs for treatment performed according to the study protocol were covered by the sponsor of the trial (Roche). The IMPACT trial consisted of two phases, a baseline (cross‐sectional) and a 1‐year follow‐up phase. Women who met the clinical endpoint (ie, biopsy‐confirmed ≥CIN2 [cervical intraepithelial neoplasia Grade 2] after the baseline colposcopy/biopsy visit) exited the study. Women who did not meet the primary endpoint and/or did not undergo treatment at baseline were invited to participate in the follow‐up phase of the trial. Subjects included in the follow‐up phase underwent an additional round of HPV and cytology testing after 12 months and, analogous to baseline procedures, were referred to colposcopy/biopsy if positive for either of these tests. The flow of the subjects through the baseline and 1‐year follow‐up phases of the IMPACT trial is shown in Figure . 2.2 Test methods Women had one cervical sample collected into a liquid‐based cytology vial (PreservCyt; Hologic Inc, Marlborough, MA) using either spatula/brush or broom‐type collection devices (approximately half of the cohort per device). Specimens were shipped to 1 of 4 central laboratories in the United States participating as clinical laboratory study sites for the trial, and all laboratory testing was performed by these four laboratories. HPV testing using both the cobas 4800 HPV Test and cobas HPV for use on the cobas 6800/8800 Systems (cobas 6800/8800 HPV test; Roche Molecular Systems, Inc, Pleasanton, CA) and cytology testing using the ThinPrep Pap Test (Hologic, Inc) were performed on all women enrolled into the IMPACT trial, according to the respective manufacturer's instructions. The use of both cobas 4800 and 6800/8800 HPV tests (each of them providing separate results for HPV16, HPV18 and the 12 "other" HPV types as a group) on every woman allowed for the assessment of the performance of the high‐throughput cobas 6800/8800 HPV test compared to the cobas 4800 HPV test in primary HPV screening, co‐testing with cytology and ASC‐US triage, as described in more detail recently. Furthermore, it allowed us to establish the performance characteristics of DS in triaging women who tested positive using either cobas HPV test.
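As a toy encoding of the baseline referral rule described in the Patient enrolment section above (the function name and result labels are invented for this Python sketch, not part of the trial protocol):

def refer_to_colposcopy(cytology: str, hpv_positive: bool) -> bool:
    """Return True if the baseline result pattern triggers colposcopy referral.

    cytology is 'NILM', 'UNSAT' (unsatisfactory), or any abnormal result
    such as 'ASC-US', 'LSIL', 'ASC-H', 'HSIL'.
    """
    abnormal_cytology = cytology not in ("NILM", "UNSAT")
    # Referral for abnormal cytology (ASC-US or worse) or any positive HPV test.
    if abnormal_cytology or hpv_positive:
        return True
    # Referral also for combined unsatisfactory cytology and HPV-negative results.
    return cytology == "UNSAT" and not hpv_positive

assert refer_to_colposcopy("NILM", True) and not refer_to_colposcopy("NILM", False)
assert refer_to_colposcopy("UNSAT", False)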
Residual cell suspension material in the PreservCyt vials from all women who were referred to colposcopy/biopsy at baseline was tested for the presence of p16/Ki‐67 dual‐stained cervical cells using the CINtec PLUS Cytology kit (Ventana Medical Systems, Inc, Tucson, AZ) on BenchMark ULTRA automated instruments according to the manufacturer's instructions. For the interpretation of p16/Ki‐67 DS slides, at least two cytotechnologists and at least two pathologists from each of the four clinical laboratory sites participated in the review of the slides. Every p16/Ki‐67 DS cytology slide was first interpreted by one cytotechnologist, and the final test result was confirmed by one pathologist. 2.3 Study cohorts For the assessment of the performance of DS in triaging HPV‐positive women, only women with positive HPV test results at baseline were included in the analyses. Results for the analysis of DS and comparators as triage tests for women positive for the cobas 6800/8800 HPV test are reported in the main body of the manuscript, whereas results for the cohort of cobas 4800 HPV positive women are tabulated in the . 2.4 Clinical endpoints Clinical endpoints for the study were biopsy‐confirmed ≥CIN2 (ie, CIN2, CIN3, adenocarcinoma in situ [ACIS] and cervical cancer; primary endpoint) and ≥CIN3 (secondary endpoint). Formalin‐fixed, paraffin‐embedded biopsy tissue specimens were used for preparation of hematoxylin and eosin (H&E)‐stained slides, as well as for p16 immunohistochemical staining using the CINtec Histology kit (Ventana Medical Systems, Inc) according to the manufacturer's instructions. The pathology review result of the respective clinical laboratory was used for clinical management of the patients. For study purposes, all tissue specimens were subjected to a central pathology review (CPR) as previously described in detail. CPR results on H&E with p16‐stained slides added to the review per Lower Anogenital Squamous Terminology (LAST) criteria (but without using HPV16/18‐positive ASC‐US as an inclusion criterion) were used as the primary reference diagnoses for the trial. 2.5 Study objectives and statistical methods Co‐primary objectives for the IMPACT trial were (a) to evaluate the performance of DS for identification of ≥CIN2/≥CIN3 when used to triage HPV‐positive women, stratified by HPV16/18 vs 12 "other" HPV genotypes, and (b) to compare the performance of DS to that of cytology when used to triage 12 "other" HPV‐positive women. Acceptable performance of DS for the first objective required 1‐negative predictive value (NPV) for ≥CIN3 (ie, the risk of ≥CIN3 among DS‐negative women) to be ≤5% for HPV16/18‐positive women. Acceptable performance for the second objective required the same for 12 "other" HPV‐positive women or, if not met, then required 1‐NPV for DS to be no worse than that of cytology for the triage of 12 "other" HPV‐positive women. Statistical analyses were performed on the intended use population of HPV‐positive women included in the IMPACT trial using SAS software, version 9.4. CPR results were tabulated by joint distribution of cytology (negative for intraepithelial lesion or malignancy [NILM], ASC‐US, AGC/ASC‐H, low‐grade squamous intraepithelial lesion [LSIL], HSIL/ACIS), HPV (HPV16+, HPV18+, HPV16/18+, 12 other HR‐HPV positive) and DS (DS+, DS−) results.
Sensitivity, specificity, positive predictive value (PPV) and NPV, 1‐NPV, positivity rate and the number of baseline colposcopies performed per disease case detected (1/PPV) were determined for each clinical endpoint (≥CIN2, ≥CIN3) for each HPV status, assessing triage using either DS or cytology both at baseline and using year‐1 cumulative disease results. These diagnostic measures were also calculated for partial genotyping scenarios, where only 12 “other” HPV‐positive cases were triaged with DS or cytology. Sensitivity, specificity, PPV, NPV (1‐NPV) and positivity rates were reported as both fractions (n/N) and percentages. Two‐sided 95% confidence intervals (CIs) were calculated using (a) the Wilson score method for sensitivity and specificity; (b) the score method according to Nam for PPV, NPV, 1‐NPV and 1/PPV; (c) the normal approximation for positivity rate; (d) the Wilson score CI‐based method according to CLSI EP12‐A2 for differences in sensitivity and specificity; and (e) the percentile bootstrap method for differences in predictive values. There were no missing data for DS results, and unknown CPR reference diagnoses were not imputed. In disposition tables, the number of cases with unsatisfactory DS results is shown, and distributions of CPR results are shown for cases with both satisfactory and unsatisfactory DS results to enable assessment of potential bias. A target sample size of 3500 HPV‐positive women was set in order for the 95% CIs for 1‐NPV for the co‐primary objectives to span approximately 3.2%. The obtained sample size of 5250 HPV‐positive women resulted in precision greater than planned.
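To make the reported measures concrete, the following is a minimal sketch of the principal triage metrics and of the Wilson score interval used for sensitivity and specificity. It is illustrative only and does not reproduce the Nam score or percentile bootstrap methods used for the predictive values; the function names and example counts are ours, not study data.

```python
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided 95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def triage_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Point estimates from a 2x2 table of triage result (DS or cytology,
    positive/negative) against disease status (>=CIN2 or >=CIN3)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": ppv,
        "1-NPV": 1 - npv,                  # risk of disease in triage-negative women
        "positivity_rate": (tp + fp) / (tp + fp + fn + tn),
        "colposcopies_per_case": 1 / ppv,  # 1/PPV
    }

# Hypothetical counts, for illustration only
print(triage_metrics(tp=450, fp=1900, fn=86, tn=2400))
print(wilson_ci(450, 450 + 86))  # Wilson 95% CI for the sensitivity estimate
```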
RESULTS

3.1 Study and analysis populations

A total of 5250 women with positive cobas 6800/8800 HPV test results at baseline were included in this analysis. For this cohort, the mean age at enrolment was 37.1 years (SD: 10.3), and the median age was 34.0 years (range, 25.0‐65.0). The proportion of women aged 25 to 29 years was 29.9% (1568/5250). These and further study population characteristics and descriptive statistics are provided in Table . Figure shows the analysis population of 4927 women with valid DS results and histologic endpoints at baseline, as well as cumulative year‐1 follow‐up numbers.

3.2 DS positivity by cytology and biopsy results

Within the cobas 6800/8800 HPV‐positive study population with valid DS results, 536 women were diagnosed with ≥CIN2 at baseline and 632 women were diagnosed with ≥CIN2 cumulatively at baseline and/or year‐1 (Figure ). Supplementary Figure 2 provides the CONSORT diagram for cobas 4800 HPV‐positive women. In all, 2382 (48.3%) HPV‐positive women were positive for DS at baseline. DS positivity rates were 33.3% (1030/3090) in HPV‐positive women with cytologic NILM, and 62.8% (510/812), 79.0% (575/728), 90.8% (129/142) and 96.6% (114/118) in women with ASC‐US (atypical squamous cells of undetermined significance), LSIL, AGC/ASC‐H (atypical glandular cells/atypical squamous cells ‐ cannot rule out HSIL) and HSIL/ACIS (high‐grade squamous intraepithelial lesion/ACIS) cytology results, respectively (Table ). Furthermore, DS positivity rates increased from 39.5% (1191/3017) of biopsy results (baseline CPR results) categorized as histological NILM to 69.3% (248/358) in CIN1, 83.7% (257/307) in CIN2, 89.0% (187/210) in CIN3, 91.7% (11/12) in ACIS and 100% (7/7) in invasive cervical cancer (Table ). DS results by baseline cytology and cumulative 1‐year histology diagnoses are shown in Supplementary Table 2; Supplementary Tables 3 and 4 provide these results for cobas 4800 HPV‐positive women.

3.3 DS performance in HPV16/18‐positive and 12 “other” HPV‐positive women

The performance of DS was assessed for the identification of high‐grade cervical disease (≥CIN2; ≥CIN3) when used to triage women aged 25 to 65 years with positive primary screening HPV test results, stratified by HPV16/18 vs 12 “other” genotype groups (Table ). In HPV16/18‐positive women, DS sensitivity for ≥CIN2 and ≥CIN3 at baseline was 91.2% and 91.9%, respectively, and specificity was 59.1% for ≥CIN2 and 54.8% for ≥CIN3. The PPV of DS positivity was high in HPV16/18‐positive women, reaching 35.1% for ≥CIN2 and 21.2% for ≥CIN3 at baseline. The risk of disease in HPV16/18‐positive, DS‐negative women (1‐NPV) for ≥CIN3 at baseline was 1.9%, meeting one of the prespecified acceptance criteria for the co‐primary study objective of the trial (1‐NPV: ≤5.0% for ≥CIN3). Overall, similar sensitivity, specificity, PPV and 1‐NPV estimates were observed for cumulative vs baseline disease endpoints, that is, for ≥CIN2 (≥CIN3) detected at baseline and/or after the 1‐year follow‐up (Table ). In 12 “other” HPV‐positive women, the sensitivity of DS for ≥CIN2 and ≥CIN3 at baseline was 83.0% and 86.0%, respectively, significantly higher than the respective sensitivity estimates of cytology: 58.8% for ≥CIN2 and 66.7% for ≥CIN3 (Table ). DS showed lower specificity but similar to slightly higher PPV for ≥CIN2 as compared to cytology in the triage of 12 “other” HPV‐positive women.
However, the rate of disease in test negatives (1‐NPV) for ≥CIN2 was significantly lower in DS‐negative women (3.6%) compared to cytology‐negative, 12 “other” HPV‐positive women (7.4%; P < .0001), cutting the risk to less than half (Table ). Supplementary Table 5 provides these results for cobas 4800 HPV‐positive women.

3.4 DS vs cytology, alone or combined with HPV16/18 genotyping, for detecting high‐grade CIN

DS alone showed a significantly higher sensitivity for the detection of ≥CIN2 in HPV‐positive women at baseline than cytology combined with HPV16/18 genotyping (86.5% vs 76.4%; P < .0001) or cytology alone (65.9%; P < .0001) (Table ). Similar results were observed at the ≥CIN3 disease threshold and for cumulative year‐1 data. Specificity of DS alone was significantly higher than specificity of HPV16/18 genotyping combined with cytology (for ≥CIN2 at baseline, 57.5% vs 47.2%; P < .0001), but significantly lower than observed for cytology alone (66.8%; P < .0001). Of note, triage with DS alone would have referred significantly fewer women to colposcopy than HPV16/18 genotyping with cytology triage for 12 “other” HPV‐positive women (48.6% vs 56.0%; P < .0001), leading to significantly higher efficiency, as shown by the lower number of colposcopies to be performed per ≥CIN2 detected (4.09 vs 5.35; P < .0001) (Table ). Adding HPV16/18 genotyping to DS provided the highest sensitivity (90.2% for ≥CIN2 and 94.3% for ≥CIN3 at baseline), however, at the cost of a substantially lower specificity compared to DS alone (Table ). Supplementary Table 6 provides these results for cobas 4800 HPV‐positive women.

3.5 Risk of high‐grade CIN in HPV‐positive women with positive or negative triage test results

The risk of ≥CIN2 and ≥CIN3 among HPV‐positive women for the various triage strategies using DS or cytology, either combined with HPV16/18 genotyping or alone, is provided in Table and graphically presented in Figure for ≥CIN3. Results for cobas 4800 HPV‐positive women are provided in Supplementary Table 5 and Supplementary Figure 3. HPV‐positive women with negative DS test results showed a very low cumulative 1‐year risk for disease (1‐NPV for ≥CIN3: 1.4%), significantly lower than the respective risks when using cytology with HPV16/18 genotyping (2.3%; P = .0181) or cytology alone (3.1%; P < .0001) (Table ). A similar level of reduction of the cumulative 1‐year risk for disease was observed at the ≥CIN2 threshold, that is, 1‐NPV of 4.8% for DS vs 8.9% and 9.2% for cytology combined with HPV16/18 genotyping and cytology alone, respectively. DS provided better risk stratification than cytology combined with HPV16/18 genotyping, identifying a larger number of women with very low risk for ≥CIN3 (2029 DS‐negative women; 51.5%) as compared to combined cytology/HPV16/18 genotyping (1732 women with NILM/12 “other” HPV‐positive results; 44.0%), whereas fewer women would be referred to colposcopy (1887 vs 2177). A DS‐negative result consistently showed the lowest risk for ≥CIN3 across all triage strategies. Women with HPV16/18‐positive and DS‐positive results had the highest risk for ≥CIN3, whereas the risk was lowest in 12 “other” HPV‐positive women with negative DS results. Of note, the risk for ≥CIN3 was similar in HPV16/18‐positive women with negative DS results as in 12 “other” HPV‐positive women with NILM (Figure ).
DISCUSSION

Many countries have either transitioned or are in the process of transitioning from cytology‐based cervical cancer screening to primary HPV screening. Primary HPV screening has a high sensitivity for detecting ≥CIN3 lesions but has a low specificity, especially in young women, who frequently have transient HPV infections. Therefore, additional triage is needed to identify HPV‐positive women at greatest risk for ≥CIN3. One approach that has been endorsed by various professional societies and used in the United States since 2014 is HPV16/18 genotyping with cytology triage of 12 “other” HPV‐positive women. Another promising approach is DS cytology. DS has previously been shown to provide high sensitivity and specificity when used for cervical cancer screening, as a triage of women with equivocal or mildly abnormal cervical cytology, and as a triage of HPV‐positive women. One of the main objectives of the IMPACT trial was to evaluate the clinical performance of DS as a triage for HPV‐positive women undergoing primary HPV screening, either by itself or in combination with HPV16/18 genotyping. DS provided both high sensitivity and good specificity for the detection of either ≥CIN2 or ≥CIN3 in HPV‐positive women. Replacing cytology with DS as the triage for women with 12 “other” genotypes in the current primary HPV screening algorithm, which includes HPV16/18 genotyping, resulted in a significant increase in sensitivity for ≥CIN3 and a modest reduction in specificity. Although the colposcopy rate at baseline using DS triage increased from 56.0% to 63.3%, because more cases of ≥CIN3 were detected using DS, the number of colposcopies needed to detect a single case of ≥CIN3 was similar (10.99 vs 11.39, respectively). Similar to the results seen in women with 12 “other” genotypes, DS‐negative, HPV16/18‐positive women also had a lower risk of ≥CIN3 than those who were cytology‐negative, HPV16/18‐positive. Similar performance estimates were observed for the cross‐sectional analysis using the baseline colposcopy data and after a 1‐year follow‐up period, and DS triage met the prespecified primary study objectives of the IMPACT trial. The comparative performance of DS vs cytology in the current trial is similar to what was previously reported from the ATHENA study but differs from what was reported in a 3‐year follow‐up study of HPV‐positive women from Kaiser Permanente Northern California (KPNC). In the ATHENA study, replacing cytology with DS as the triage for women with 12 “other” genotypes in the algorithm with HPV16/18 genotyping resulted in a significant increase in sensitivity for ≥CIN3 detected at baseline (86.8% and 78.2%, respectively) but similar specificities (57.4% and 57.6%, respectively). In contrast, the KPNC study found no significant difference in sensitivity for ≥CIN3 when HPV16/18 genotyping with cytology triage of 12 “other” genotypes was used compared to HPV16/18 genotyping with DS triage of 12 “other” genotypes (92.8% vs 92.4%, respectively). Triage of 12 “other” genotypes with DS also had a significantly higher specificity (46.5%) compared to cytology (36.1%). There are several differences between the KPNC study and IMPACT that could potentially explain why the results differ. One is that HPV‐positive women in the KPNC study were managed according to standard clinical guidelines. Women with negative cytology underwent repeat co‐testing at 1 year, irrespective of HPV genotype, and women only received colposcopy if the repeat test was positive.
Another difference is that cytology in the KPNC study had an especially high sensitivity and a low specificity. Furthermore, two HPV tests were utilized in IMPACT, and women positive on either HPV test or cytology were referred to colposcopy. In the KPNC study, the sensitivity of cytology for ≥CIN3 (3 years, cumulative) in HPV‐positive women, irrespective of genotype, was 84.3% and specificity was 42.9%. In IMPACT, the sensitivity of cytology for ≥CIN3 (1 year, cumulative) in HPV‐positive women, irrespective of genotype, was 71.3% and specificity was 64.9%. The ATHENA study had a study design similar to that of IMPACT, and the sensitivity of cytology for ≥CIN3 (baseline) in HPV‐positive women, irrespective of genotype, was 52.8% and specificity was 64.9%. It is important to note that, in contrast to the variable performance of cytology in the KPNC study and IMPACT, the performance of DS in the two studies was highly consistent. The sensitivity for ≥CIN3 of the algorithm incorporating HPV16/18 genotyping with DS triage of 12 “other” genotypes was 94.3% (1 year, cumulative) in IMPACT and 92.4% (3 years, cumulative) in KPNC. Since the risk of ≥CIN2 or ≥CIN3 in DS‐negative, HPV‐positive women was low, irrespective of HPV genotype, we evaluated the performance of DS as a stand‐alone triage test for HPV‐positive women. DS used as the sole triage tool for HPV‐positive women provided significantly better sensitivity and specificity than the current algorithm of HPV16/18 genotyping and cytology triage of 12 “other” genotypes. When DS was used alone to triage HPV‐positive women, the cumulative risk (1‐NPV) of ≥CIN3 in triage‐negative women was only 1.4%, compared to 2.3% in women who were triage‐negative using the algorithm of HPV16/18 genotyping with cytology triage of 12 “other” genotypes. Triaging HPV‐positive women with DS alone would have referred a significantly lower number of women to colposcopy vs HPV16/18 genotyping with cytology triage of 12 “other” genotypes (48.6% vs 56.0%, respectively; P < .0001). Similar findings were observed in the 3‐year KPNC study. The 3‐year risk of ≥CIN3 in HPV‐positive, DS‐negative women was 1.7%, compared to 1.4% in triage‐negative women using HPV16/18 genotyping and cytology for 12 “other” genotypes. Another KPNC study evaluated the long‐term reassurance that a negative DS result provides in HPV‐positive women. DS‐negative, HPV‐positive women had a lower 5‐year risk of ≥CIN2 than cytology‐negative, HPV‐positive women. Even after 5 years, the risk of ≥CIN3 remained below KPNC's colposcopy referral threshold. Since the risk of ≥CIN3 was consistently lowest whenever DS was negative, irrespective of HPV genotype, DS generally provides the best risk stratification for HPV‐positive women. Current American Society for Colposcopy and Cervical Pathology (ASCCP) guidelines take a risk‐centered approach to patient management based on a woman's risk of ≥CIN3. A key risk threshold is a ≥4% immediate risk of ≥CIN3, which is the risk level at which women are referred to colposcopy. Women at lower risk of ≥CIN3 can undergo either 12‐month follow‐up or interval screening. Irrespective of whether HPV‐positive women are triaged using an algorithm incorporating HPV16/18 genotyping and DS for 12 “other” genotypes or triaged using DS alone, the risk of ≥CIN3 in DS triage‐negative women is considerably less than 4%. Even in HPV16/18‐positive women, the risk of ≥CIN3 does not meet the colposcopy referral threshold if they are DS negative.
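The risk-centered logic discussed here can be rendered as a short sketch. The ≥4% immediate-risk threshold for colposcopy referral comes from the ASCCP guidance cited above, and the ≤0.55% 5-year return-to-screening threshold is discussed in the next paragraph; the function itself is only an illustration, not a clinical decision tool.

```python
def management_action(immediate_cin3_risk: float, five_year_cin3_risk: float) -> str:
    """Map estimated >=CIN3 risks to a management action per the ASCCP thresholds."""
    if immediate_cin3_risk >= 0.04:    # >=4% immediate risk -> colposcopy referral
        return "refer to colposcopy"
    if five_year_cin3_risk <= 0.0055:  # <=0.55% 5-year risk -> routine screening
        return "return to routine screening"
    return "12-month follow-up or interval screening"
```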
The length of follow‐up of the current study was limited to 1 year. However, two other studies from KPNC have reported similar results with up to 5 years of follow‐up. Another activity that utilizes considerable screening resources is retesting triage‐negative women at 12 months. Risk cutoffs for returning women to routine screening vary. The ASCCP recommends that only women with a ≤0.55% 5‐year risk of ≥CIN3 return to routine screening. At this risk threshold, none of the HPV‐positive women in our study could return to routine screening, regardless of triage approach. However, the two KPNC studies of DS used a different risk threshold: the risk of ≥CIN3 in HPV‐positive women with negative cytology. The 1‐year risk of ≥CIN3 in these women was 2.8%. Using the KPNC risk threshold, women with 12 “other” genotypes who were cytology or DS negative, as well as DS‐negative, HPV‐positive women (without genotyping), could return to routine screening in both the KPNC studies and IMPACT. Our study has several strengths and limitations. Strengths include that IMPACT was a large prospective study that enrolled women at 32 clinical centers and assessed DS test performance in 4 central laboratories that also performed the cytology and HPV testing. Disease ascertainment was maximized by referring all women who tested positive with either of two HPV tests or who had abnormal cytology to colposcopy. Women who fulfilled the initial colposcopy referral criteria were followed up at 1 year, and those who were either HPV or cytology positive at 1 year were referred for another colposcopy. To eliminate potential study bias, colposcopy was performed blinded to all test results, and a nontargeted biopsy was collected when no lesion was identified at colposcopy. A CPR was performed on both H&E and H&E + p16‐stained biopsy specimens. Limitations include the fact that study follow‐up was limited to 1 year; therefore, the reassurance provided by a negative DS result beyond 1 year could not be assessed. In conclusion, the results of the IMPACT trial demonstrate that DS is safe and effective for the triage of HPV‐positive women identified during primary HPV screening. DS alone or in combination with HPV16/18 genotyping offers an alternative to current triage strategies, which are based on cytology, either alone or combined with HPV16/18 genotyping. DS‐based triage provides consistently higher sensitivity than cytology‐based triage, providing better reassurance against ≥CIN2 and ≥CIN3. Using DS alone as the triage reduces the complexity of triage strategies for HPV‐positive women.
Drs. Stoler and Wright are consultants to Roche, BD Life Sciences, Inovio, and QSquared Solutions. They are speakers for Roche and BD Life Sciences. The other authors are employees of Roche.
All patients enrolled into the trial provided their informed consent before any study procedures. The study protocol and all amendments were approved by an Institutional Review Board (IRB). The trial was conducted in compliance with International Conference on Harmonization (ICH) Good Clinical Practice (GCP) Guidelines, applicable regulations of the U.S. Food and Drug Administration (FDA), and in accordance with the ethical principles originating in the Declaration of Helsinki. This was an observational, noninterventional diagnostic study and, in alignment with FDAAA 2007, World Health Organization (WHO) and ICMJE guidance, was not registered with ClinicalTrials.gov.
Supplementary Figure 1. Flow of subjects through baseline and follow‐up phases of the IMPACT trial
Supplementary Figure 2. CONSORT diagram. Triage of cobas 4800 HPV‐positive women
Supplementary Figure 3. Risk of ≥CIN3 in cobas 4800 HPV‐positive women dependent on HPV genotype group, cytology, and Dual‐stain results
Supplementary Table 1. Demographic characteristics and HPV vaccination status by Dual‐stain results of the cobas 6800/8800 HPV‐positive study population
Supplementary Table 2. HPV genotype and Dual‐stain results by baseline cytology and cumulative 1‐year histology results among cobas 6800/8800 HPV‐positive women
Supplementary Table 3. HPV genotype and Dual‐stain results by cytology and baseline histology results among cobas 4800 HPV‐positive women
Supplementary Table 4. HPV genotype and Dual‐stain results by baseline cytology and cumulative 1‐year histology results among cobas 4800 HPV‐positive women
Supplementary Table 5. Triage of cobas 4800 HPV‐positive women with Dual‐stain and cytology by HPV genotype group: baseline and cumulative 1‐year data for ≥CIN2 and ≥CIN3
Supplementary Table 6. Triage performance of Dual‐stain and cytology, alone or in combination with HPV16/18 genotyping for detecting ≥CIN2 and ≥CIN3 in cobas 4800 HPV‐positive women: baseline and cumulative 1‐year data
Nurses’ Contributions in Rural Family Medicine Education: A Mixed-Method Approach

1. Introduction

Family medicine education involves various clinical experiences that broaden the scope of practice for family medicine residents. During this experience, nurses are among residents’ most frequent collaborators. The scope of practice refers to the range of healthcare issues medical professionals can treat. Upon graduating from residency, family physicians are expected to treat various healthcare issues related to patients’ biopsychosocial problems and frequently collaborate with nurses in making clinical decisions and providing treatment. In family medicine education, collaboration with nurses is vital for the development of family physicians’ competence. Nurses observe residents frequently in clinical situations, assess their skills, and provide feedback to improve their knowledge, skills, and attitudes. For the effective provision of family medicine education, the involvement of nurses can be critical, leading to a better quality of care for patients. Residents may not be able to perform proper clinical reasoning and decision-making in clinical situations while under pressure, despite having the proper medical knowledge. In such situations, nurses’ observations and feedback to residents and senior doctors can improve the quality of family medicine education. Through collaboration with senior doctors and nurses, patients can manage their problems more smoothly. Previous studies have shown that nurses can act as practitioners of patient safety and educators of medical residents. Their observations and feedback to medical residents and senior doctors can make patients safer and lead to more effective patient care. The level of nurses’ contributions to physicians’ education can differ depending on clinical conditions, such as medical resources, the number of senior doctors, and the specialty of education. Rural family medicine residents experience a wide scope of practice owing to the demand in rural hospitals; therefore, nurses’ support and feedback can be especially valuable in rural family medicine education. Rural family medicine education may involve various conflicts due to systemic and cultural changes for medical residents, as they may have to change their working styles in adjusting to rural clinical situations. In these processes, as the number of senior doctors is low, nurses may play critical roles in helping residents resolve these conflicts. As nurses frequently observe the residents, they could provide residents with various educational recommendations to improve their collaboration. Moreover, effective support and safety netting for patient care should be provided. Furthermore, in rural areas, there is a lack of physicians; therefore, interprofessional collaboration facilitates good medical education in rural family medicine. Currently, there is a lack of evidence regarding nurses’ contributions to rural family medicine education. In addition, nurses’ difficulties in rural family medicine education have not been clarified. Therefore, the research question was “How do nurses contribute to rural family medicine education, and what difficulties do they experience?” By clarifying nurses’ contributions and difficulties in rural family medicine education, a concrete revision of such education can be executed, which may lead to better interprofessional collaboration in patient care and education.
Therefore, this research aimed to clarify nurses’ contributions and difficulties in rural family medicine education using a mixed-method approach.
2. Materials and Methods

This mixed-method research was conducted to investigate nurses’ contributions and difficulties in relation to family medicine education in a rural hospital using questionnaires (quantitative method) and interviews (qualitative method). Ethnography and interviews were conducted to clarify how nurses perceive their role in rural family medicine education, including the difficulties and the support required. The study duration was from 1 April to 31 December 2021. The researchers were participatory observers; they informed the residents and discussed the study application with the nurses. Additionally, a questionnaire regarding nurses’ ideas of their roles in family medicine residents’ education was provided to the participants to investigate their ideas quantitatively.

2.1. Setting

Unnan City is one of the smallest and most remote cities in Japan and is located southeast of an administrative unit in a rural setting. In 2020, the total population of the city was 37,638 (18,145 males and 19,492 females), and 39% were aged over 65 years; this statistic is expected to reach 50% by 2050. The city has 16 clinics, 12 home care stations, 3 visiting nurse stations, and only 1 public hospital. At the time of the study, the Unnan City Hospital had 281 care beds: 160 acute care, 43 comprehensive care, 30 rehabilitation, and 48 chronic care beds. The nurse-to-patient ratios were 1:10 for acute care, 1:13 for comprehensive care, 1:15 for rehabilitation, and 1:25 for chronic care. The hospital had 27 physicians, 197 nurses, 7 pharmacists, 15 clinical technicians, 37 therapists, 4 nutritionists, and 34 clerks.

2.2. Educational Curriculum of Family Medicine Education in Unnan City Hospital

The educational curriculum is based on the Japanese Primary Care Association’s Board of Family Medicine, which was developed according to the World Standard of Education of Family Medicine. In this curriculum, residents experience various clinical situations with their patients. In their first year, residents worked at a community hospital (Unnan City Hospital) for one year and treated typical diseases in both inpatient and outpatient situations. Additionally, they worked at a rural clinic for 6 months to learn home care and community-oriented primary care. To broaden their scope of practice regarding internal medicine, pediatrics, and emergency medicine, they worked at a general hospital. Each clinical setting included a medical teacher. Residents learned content through cognitive apprenticeship, legitimate peripheral participation, and continuous reflection with medical teachers and students. The formative and summative assessments of the learners were accomplished using Mini-CEX, multiple-source feedback, and portfolios. After 3 years of training, the residents undergo a national examination in family medicine and obtain a family physician’s certificate. In the first year of the training, which began on 1 April, medical residents collaborated with various medical professionals at the community hospital. This curriculum can be utilized to educate a maximum of three residents simultaneously. One resident in 2018 and 2019 and three in 2020 and 2021 engaged in the curriculum.

2.3. Participants

The participants were registered nurses working in a rural community hospital. They were informed of the research purpose and agreed to participate. They were chosen from all hospital wards. The registered nurses who participated in this study had experience working with physicians and nurses in the hospital.
In addition, the qualitative interviews involved nurses who were charged with the administration of each ward. Each ward had two or three nurses with administrative roles, and all of them were requested to participate in this research and consented to participation. Overall, 88 nurses completed the distributed questionnaire, and of these, 20 nurses with administrative roles were interviewed based on the results of their questionnaires.

2.4. Data Collection

2.4.1. Questionnaire

A questionnaire was provided to the participants regarding their roles in family medicine residents’ education. Based on a previous study, seven items were constructed with respect to the concepts of previous research: nurses as teachers, guardians of patient well-being, providers of emotional support, providers of general support, expert advisors, navigators, and team players. The seven items were as follows: nurses need to provide educational support to medical residents to protect the safety of patients (Item 1: guardians of patient well-being); nurses need to convey the background and concrete information of patients and their families to support the medical care of residents (Item 2: navigators); nurses need to teach how nurses work in wards and how to prepare medical equipment so that residents can work smoothly in wards (Item 3: providers of general support); nurses need to support the medical care of residents to grow their personality and ability (Item 4: nurses as teachers); nurses need to play a supporting role in the emotional changes of residents (Item 5: providers of emotional support); nurses need to provide residents with knowledge regarding patient care as nursing specialists (Item 6: expert advisors); and nurses need to help residents make appropriate decisions in patient care (Item 7: team players). Each item was answered on a five-point Likert scale ranging from strongly agree (five) to strongly disagree (one). In addition, the gender, clinical experience, workplace, and educational background of the participants were collected.

2.4.2. Ethnography and Semi-Structured Interviews

The first author performed ethnography and semi-structured interviews with the participants. This researcher’s specialties were family medicine, medical education, and public health. The researcher worked in all hospital wards, observed the interaction between residents and nurses in each ward, and took field notes during this process. During the observation period, the researcher interviewed the nurses. The interviews were performed based on the questionnaire results. The interview guide included three questions:

- The first question was “How do you feel about the current family medicine education in community hospitals?” The follow-up questions focused on the accomplishment of their education of family medicine residents.
- The second question was “What do you think you can do in family medicine education at a community hospital?” The follow-up questions focused on how the nurses educated family medicine residents as per the positive quantitative results for the questionnaire items.
- The third question was “How do you think your role impedes family medicine education at a community hospital?” The follow-up questions focused on how the nurses educated family medicine residents as per the negative quantitative results for the questionnaire items.

Each interview lasted approximately 30 min and was recorded and transcribed verbatim. The transcript was shared with the interviewees to confirm the credibility of the content.
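For reference, the instrument described in Section 2.4.1 can be encoded as a small data structure, which the analysis sketched in Section 2.5 would operate on. The variable names are ours, not part of the study materials.

```python
# The seven questionnaire items, keyed by item number and mapped to the
# nursing-role concept each one probes.
ITEMS = {
    1: "guardians of patient well-being",
    2: "navigators",
    3: "providers of general support",
    4: "nurses as teachers",
    5: "providers of emotional support",
    6: "expert advisors",
    7: "team players",
}

# Five-point Likert scale: only the endpoints are labeled in the source.
LIKERT_SCALE = (1, 2, 3, 4, 5)  # 1 = strongly disagree ... 5 = strongly agree
```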
2.5. Data Analysis

Quantitative data were analyzed using Student’s t-test and a chi-square test for the background data. The results of each question regarding nurses’ roles in family medicine education were compared between the characteristics of the wards in which the participants worked (acute or chronic care) using Student’s t-test. Regarding the qualitative data, the grounded theory approach was used to clarify nurses’ contributions and difficulties in regard to rural family medicine education in rural community hospitals. The first and second authors carefully and thoughtfully read the field notes and transcriptions. After reading them in depth, the third author coded the contents and developed codebooks based on repeated reading of the research materials as the initial coding. This study used process and concept coding. The first author also coded the materials and discussed the coding and codebooks with the third author for coding refinement. In the second coding, the first and third authors induced, merged, deleted, or refined the concepts and themes by going back and forth between the research materials and the initial coding. This process was performed through constant discussion until mutual agreement was reached and repeated until no new codes or concepts appeared, indicating saturation. For member checking, the analysis was provided to all participants, whose feedback was then included in the final revision of themes and concepts. Eventually, no new themes emerged during member checking, indicating saturation. Finally, the theory was discussed by two authors, who ultimately reached an agreement on the final theory.

2.6. Ethical Consideration

Before this study, the participants were informed that the collected data would only be used for research purposes. They were also informed of the research aims, how the data would be disclosed, and how their personal information would be protected. The participants then provided written informed consent. This study was approved by the Unnan Hospital Clinical Ethics Committee (approval code: 20210022).
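As a concrete illustration of the quantitative comparison described in Section 2.5, the following sketch applies Student’s t-test to one Likert item split by ward type. The responses shown are hypothetical placeholders, not study data; scipy is assumed to be available.

```python
from scipy import stats

# Hypothetical five-point Likert responses for Item 4 ("nurses as teachers"),
# grouped by the ward type in which the respondent worked.
acute_ward = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3]
chronic_ward = [4, 5, 4, 4, 3, 5, 4, 4, 5, 4]

# Student's t-test (equal variances assumed, matching the stated method)
t_stat, p_value = stats.ttest_ind(acute_ward, chronic_ward)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests ward types differ
```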
Unnan City is one of the smallest and most remote cities in Japan and is located southeast of an administrative unit in a rural setting. In 2020, the total population of the city was 37,638 (18,145 males and 19,492 females), and 39% were aged over 65 years; this statistic is expected to reach 50% by 2050. This city has 16 clinics, 12 home care stations, 3 visiting nurse stations, and only 1 public hospital. At the time of the study, the Unnan City Hospital had 281 care beds:160 acute care, 43 comprehensive care, 30 rehabilitation, and 48 chronic care beds. The nurse-to-patient ratios were 1:10 for acute care, 1:13 for comprehensive care, 1:15 for rehabilitation, and 1:25 for chronic care. The hospital had 27 physicians, 197 nurses, 7 pharmacists, 15 clinical technicians, 37 therapists, 4 nutritionists, and 34 clerks .
The educational curriculum is based on the Japanese Primary Care Association’s Board of Family Medicine, which was developed according to the World Standard of Education of Family Medicine . In this curriculum, residents experience various clinical situations with their patients. In their first year, residents worked at a community hospital (Unnan City Hospital) for one year and treated typical diseases in both inpatient and outpatient situations. Additionally, they worked at a rural clinic for 6 months to learn home care and community-oriented primary care. To broaden their scope of practice regarding internal medicine, pediatrics, and emergency medicine, they worked at a general hospital. Each clinical setting included a medical teacher. Residents learned content through cognitive apprenticeship, legitimate peripheral participation, and continuous reflection with medical teachers and students . The formative and summative assessments of the learners were accomplished using Mini-CEX, multiple-source feedback, and portfolios. After 3 years of training, the residents undergo a national examination in family medicine and obtain a family physician’s certificate . In the first year of the training, which began on 1 April medical residents collaborated with various medical professionals at the community hospital. This curriculum can be utilized to educate a maximum of three residents simultaneously. One resident in 2018 and 2019 and three in 2020 and 2021 engaged in the curriculum.
The participants were registered nurses working in a rural community hospital. They were informed of the research purpose and agreed to participate. They were chosen from all hospital wards. The registered nurses who participated in this study had experience with physicians and nurses in the hospital. In addition, the qualitative interviews involved nurses who were charged with the administration of each ward. Each ward had two or three nurses with administrative roles, and all of them were requested to participate in this research and consented to participation. Overall, 88 nurses completed this distributed questionnaire, and of these, 20 nurses with administrative roles were interviewed based on the results of their questionnaires.
2.4.1. Questionnaire A questionnaire was provided to the participants regarding their roles in family medicine residents’ education. Based on a previous study, seven items were constructed with respect to the concepts of previous research: nurses as teachers, guardians of patient well-being, providers of emotional support, providers of general support, expert advisors, navigators, and team players . The seven items were as follows: nurses need to provide educational support to medical residents to protect the safety of patients (Item 1: guardians of patient well-being); nurses need to convey the background and concrete information of patients and their families to support the medical care of residents (Item 2: navigators); nurses need to teach how nurses work in wards and how to prepare medical equipment so that residents can work smoothly in wards (Item 3: providers of general support); nurses need to support the medical care of residents to grow their personality and ability (Item 4: nurses as teachers); nurses need to play a supporting role in the emotional changes of residents (Item 5: providers of emotional support); nurses need to provide residents with knowledge regarding patient care as nursing specialists (Item 6: expert advisors); and nurses need to help residents make appropriate decisions in patient care (Item 7: team players). Each item was answered on a five-point Likert scale ranging from strongly agree (five) to strongly disagree (one). In addition, the gender, clinical experience, workplace, and educational background of the participants were collected. 2.4.2. Ethnography and Semi-Structured Interviews The first author performed ethnography and semi-structured interviews with the participants. This researcher’s specialties were family medicine, medical education, and public health. The researcher worked in all hospital wards, observed the interaction between residents and nurses in each ward, and took field notes during this process. During the observation period, the researcher interviewed the nurses. The interviews were performed based on the questionnaire results. The interview guide included three questions. The first question was “How do you feel about the current family medicine education in community hospitals?” - The follow-up questions focused on the accomplishment of their education of family medicine residents. The second question was “What do you think you can do in family medicine education at a community hospital?” - The follow-up questions focused on how the nurses educated family medicine residents as per the positive quantitative results for the questionnaire items. The third question was “How do you think your role impedes family medicine education at a community hospital?” - The follow-up questions focused on how the nurses educated family medicine residents as per the negative quantitative results for the questionnaire items. Each interview lasted approximately 30 min and was recorded and transcribed verbatim. The transcript was shared with the interviewees to confirm the credibility of the content.
A questionnaire was provided to the participants regarding their roles in family medicine residents’ education. Based on a previous study, seven items were constructed with respect to the concepts of previous research: nurses as teachers, guardians of patient well-being, providers of emotional support, providers of general support, expert advisors, navigators, and team players . The seven items were as follows: nurses need to provide educational support to medical residents to protect the safety of patients (Item 1: guardians of patient well-being); nurses need to convey the background and concrete information of patients and their families to support the medical care of residents (Item 2: navigators); nurses need to teach how nurses work in wards and how to prepare medical equipment so that residents can work smoothly in wards (Item 3: providers of general support); nurses need to support the medical care of residents to grow their personality and ability (Item 4: nurses as teachers); nurses need to play a supporting role in the emotional changes of residents (Item 5: providers of emotional support); nurses need to provide residents with knowledge regarding patient care as nursing specialists (Item 6: expert advisors); and nurses need to help residents make appropriate decisions in patient care (Item 7: team players). Each item was answered on a five-point Likert scale ranging from strongly agree (five) to strongly disagree (one). In addition, the gender, clinical experience, workplace, and educational background of the participants were collected.
The first author performed ethnography and semi-structured interviews with the participants. This researcher’s specialties were family medicine, medical education, and public health. The researcher worked in all hospital wards, observed the interaction between residents and nurses in each ward, and took field notes during this process. During the observation period, the researcher interviewed the nurses. The interviews were performed based on the questionnaire results. The interview guide included three questions. The first question was “How do you feel about the current family medicine education in community hospitals?” - The follow-up questions focused on the accomplishment of their education of family medicine residents. The second question was “What do you think you can do in family medicine education at a community hospital?” - The follow-up questions focused on how the nurses educated family medicine residents as per the positive quantitative results for the questionnaire items. The third question was “How do you think your role impedes family medicine education at a community hospital?” - The follow-up questions focused on how the nurses educated family medicine residents as per the negative quantitative results for the questionnaire items. Each interview lasted approximately 30 min and was recorded and transcribed verbatim. The transcript was shared with the interviewees to confirm the credibility of the content.
Quantitative background data were analyzed using Student’s t-test and a Chi-square test. The score for each question regarding nurses’ roles in family medicine education was compared, using Student’s t-test, between the types of ward in which the participants worked (acute or chronic care); a minimal sketch of these comparisons is given below. For the qualitative data, the grounded theory approach was used to clarify nurses’ contributions and difficulties regarding family medicine education in rural community hospitals. The first and second authors carefully and thoughtfully read the field notes and transcriptions. After reading them in depth, the third author coded the contents and developed codebooks based on repeated reading of the research materials as the initial coding. This study used process and concept coding. The first author also coded the materials and discussed the coding and codebooks with the third author to refine the coding. In the second coding, the first and third authors induced, merged, deleted, or refined the concepts and themes by going back and forth between the research materials and the initial coding. This process was performed through constant discussion until mutual agreement was reached and was repeated until no new codes or concepts appeared, indicating saturation. For member checking, the analysis was provided to all participants, whose feedback was then incorporated into the final revision of themes and concepts. No new themes emerged during member checking, further indicating saturation. Finally, the theory was discussed by two authors, who ultimately reached agreement on the final theory.
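As a minimal sketch of the quantitative comparison described above, the following Python code applies Student’s t-test to per-item Likert scores grouped by ward type and a Chi-square test to a background variable. All numbers are invented placeholders for demonstration, not the study data.

# Minimal sketch of the acute- vs chronic-ward comparisons using
# scipy; the scores and counts below are invented examples only.
from scipy import stats

# Hypothetical Likert scores (1-5) for one questionnaire item.
acute_scores = [3, 2, 4, 3, 2, 3, 3, 2, 4]
chronic_scores = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4, 4]

t_stat, p_value = stats.ttest_ind(acute_scores, chronic_scores)
print(f"Student's t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

# Hypothetical 2 x 2 table for a background variable, e.g.,
# educational background (specialized school vs university) by ward.
table = [[6, 3],   # acute care ward
         [8, 3]]   # chronic care ward
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"Chi-square test: chi2 = {chi2:.2f}, p = {p:.3f}")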
Before this study, the participants were informed that the collected data would only be used for research purposes. They were also informed of the research aims, how the data would be disclosed, and how their personal information would be protected. The participants then provided written informed consent. This study was approved by the Unnan Hospital Clinical Ethics Committee (approval code: 20210022).
3.1. Results of the Questionnaire on Nurses’ Roles in Family Medicine Education
The nurses’ average clinical experience was 20.16 years (standard deviation [SD] = 8.86), and most participants had graduated from specialized nursing schools. All of the participants were women. Regarding the questionnaire, the scores for the items on “nurses as teachers” and “providers of emotional support” were significantly lower among the participants working in acute care wards than among those working in chronic care wards (p = 0.024 and 0.047, respectively). The other items regarding nurses’ roles—“guardians of patient well-being”, “navigators”, “providers of general support”, “expert advisors”, and “team players”—had higher scores than “nurses as teachers” and “providers of emotional support” but showed no statistically significant differences between ward types.

3.2. Results of the Qualitative Analysis Regarding the Nurses’ Roles in Family Medicine Education
A total of 34 pages of field notes were compiled. A total of 20 nurses were interviewed (9 from acute care wards and 11 from chronic care wards); eight were nursing directors and 12 were semi-directors. Three themes and eight concepts were extracted using the grounded theory approach. The themes were nurturing professionalism, driving interprofessional collaboration, and respect for the environment and nurses’ competence. Each theme is explained below based on the relevant concepts and related quotations.

3.3. Nurturing Professionalism
The nurses interacted with the family medicine residents and realized that the residents should improve their professionalism while caring for patients as medical doctors. They attempted to discuss these behaviors with the residents as authentic physicians. Residents tended to decide on treatments and care based primarily on medical aspects. However, the nurses attempted to incorporate respect for patients’ backgrounds into residents’ decision-making to facilitate effective care, because various ethical decisions are made in geriatric medicine. When the residents struggled with ethical decision-making, the nurses conversed with them to support their discussions with patients and their families and to effectively address these issues. Consistent with our quantitative results, nurses functioned as guardians of patient well-being, navigators, and expert advisors.

3.3.1. Responsibility as a Physician
The nurses realized that residents needed to modify their attitudes toward patients, families, and other medical staff as professional physicians, which the nurses could support. Family medicine residents had been trained in various medical situations by their teachers but had no prior experience of authentic responsibility for patient care. In the rural hospital, they had to determine patients’ treatment and care plans in both outpatient and inpatient departments. In these processes, the residents’ ambiguous attitudes and vague decision-making confused the nurses. Nurses tried to modify residents’ attitudes toward medical care. They realized that before determining medical care, medical residents should nurture themselves as authentic physicians. Participants stated the following: “I understand that the medical residents learned a lot about medical issues. However, their attitudes toward patient care may not be authentic.” (Participant 1, acute care ward) “Medical residents’ vague attitude toward patients is dangerous for patient care.
They should recognize the responsibility of doctors as medical professionals.” (Participant 5, chronic care ward) “The residents’ attitudes as professionals can be nurtured through the experiences and discussions with teachers and us. Therefore, I think that nurses play a role in improving residents’ professionalism.” (Participant 2, acute care ward) The nurses experienced difficulties in patient care due to the residents’ low level of professionalism; however, they attempted to discuss professionalism with the residents and nurture it through clinical experiences and collaboration with nurses.

3.3.2. Respecting Patients’ Backgrounds
The nurses observed that the medical residents did not respect patients’ backgrounds and needed to include psychosocial aspects in their decision-making. Residents’ decision-making primarily focused on biomedical conditions and did not consider patients’ lives in their homes or nursing homes. The nurses knew that the medical residents had learned the biopsychosocial model, in which family physicians understand patients not only from a biomedical perspective but also from a psychosocial perspective to provide better care. However, the residents’ skills and attitudes needed to be enhanced through clinical experience and collaboration with nurses. Participants stated the following: “The medical residents did not understand the patients’ lives in their homes. They should respect the patients’ lives while respecting their quality of life at home. Their medical decisions may improve medical conditions, but they detach patients to discharge in their home. Their decisions should be supported by nurses considering various patient contexts.” (Participant 12, chronic care ward) “I understand that the medical residents tried to understand patients’ conditions from various perspectives. Understanding patients in the context of their lives requires numerous experiences. Nurses try to support their learning and medical decisions with respect to the patient’s background. Medical residents can accept our suggestions and improve their skills and attitudes effectively.” (Participant 3, acute care ward) The nurses realized that they played a role in education regarding the biopsychosocial approach. Their support in resident education required humility and honesty from the medical residents.

3.3.3. Enhancing Ethical Attitude
Through residents’ improvement as authentic physicians and respect for patients’ backgrounds, the nurses believed that medical residents could learn ethical attitudes in the treatment of older and frail patients through discussion with nurses. Medical residents are exposed to various ethical issues when caring for older patients. The residents struggled with ethical decision-making because of their lack of experience. The nurses realized that they could effectively support the residents through dialogue regarding patients’ quality of life. Participants stated the following: “Ethical decisions are complicated and challenging. Medical residents often struggle to consider their patients’ conditions, such as discussions about life extension, gastrostomy, and the possibility of home care.” (Participant 20, chronic care ward) “The medical residents lack the experiences of decision making about ethical issues.
Thus, experienced nurses support their decision making and inform them about the patient’s context and their family’s ideas for their lives.” (Participant 7, acute care ward) The nurses respected patients’ lives from various perspectives and supported medical residents in making ethical decisions by providing important information when the residents were struggling. Nurses’ contribution to modifying residents’ attitudes as authentic physicians and incorporating respect for patients’ backgrounds improved residents’ professionalism and enhanced their ethical consideration.

3.4. Driving Interprofessional Collaboration
Nurses emphasized the importance of collaboration. They believed that medical teachers, as well as nurses, should first give detailed feedback to the residents regarding effective collaboration. In clinical situations, the nurses educated residents regarding collaboration among medical professionals. In addition, nurses considered that, through various educational insights and experiences, the medical residents could realize the effectiveness of interprofessional collaboration in improving patient care. Thus, consistent with the quantitative results, nurses functioned as providers of general support and team players.

3.4.1. Getting Feedback from Teachers and Nurses
Nurses realized that medical residents could collaborate with other medical professionals regarding their mental conditions and discuss their collaborative ideas with medical teachers and nurses. Mutual understanding is essential for collaboration. The nurses hoped to educate residents regarding nurses’ roles in resident education through discussions with medical teachers. Participants stated the following: “I tried to educate the medical residents. I hope to learn how to improve my role in their education. To get feedback on the education, medical teachers and residents should discuss education with and provide feedback to the nurses.” (Participant 5, chronic care ward) “Collaboration between the teachers and the residents is essential. As one of the teachers, I want to improve my role in education and try to give and obtain feedback from residents and medical teachers.” (Participant 11, acute care ward) Nurses wanted mutual feedback on residents’ education to improve their role in that education. For improvement, medical teachers and nurses need to provide feedback to the residents regarding interprofessional collaboration. Moreover, feedback on nurses’ contributions to resident education is required to improve education.

3.4.2. Importance of Dialogue with Other Professionals
Nurses considered that medical residents could learn more about the importance of dialogue with other professionals. In practice, the residents had to collaborate with various professionals, as care for older patients requires various types of professional care during and after admission. The collaborative process demanded that the residents engage in discussions with other professionals. During this process, nurses educated the residents regarding dialogue with other professionals. One of the participants stated, “I had many discussions with the medical residents regarding patient care. Initially, they attempted to collaborate with other professionals. However, they tended to be persistent in their opinions without accepting nurses’ ideas. They should have made decisions based on dialogue with the nurses” (Participant 13, chronic care ward).
Through frequent dialogue with nurses and other medical professionals, medical residents can learn to collaborate with nurses effectively. The nurses realized that their facilitation, and the residents’ acceptance of the nurses’ positive feedback, improved interprofessional collaboration. Participants stated the following: “The medical residents could improve their understanding and skills in collaboration with other medical professionals. Now, they are trying to respect the various ideas of other professionals.” (Participant 6, acute care ward) “They could become honest in patients’ care through various clinical experiences. They could now accept various ideas.” (Participant 10, chronic care ward)

3.4.3. Quality Improvement of Care through Collaboration
For better patient care, the nurses considered that residents’ education should include teaching the effectiveness of interprofessional collaboration. The nurses made an effort to support the residents in order to facilitate a better quality of care by the medical team. One participant stated: “For a true understanding of the importance of interprofessional collaboration, residents should understand the effectiveness of interprofessional collaboration. Therefore, I have tried to advise medical residents to improve patient care by respecting other professional perspectives.” (Participant 19, chronic care ward) Understanding the effectiveness of interprofessional collaboration through clinical experience improved the residents’ understanding. The nurses observed the medical residents’ changing skills and attitudes toward interprofessional collaboration. One of the participants stated, “I think that medical residents could realize the effectiveness of interprofessional collaboration. In the dialogue and discussion with the medical residents, I felt that they tried to obtain advice from other professionals” (Participant 2, acute care ward).

3.5. Respect for the Environment and Nurses’ Competence
The nurses considered that medical residents should respect the culture of the wards and of nursing. As the medical residents came to a rural hospital from a tertiary hospital, their working environment changed drastically. The residents’ initial work did not match the ward environment and nursing culture. The nurses attempted to educate residents on their working styles. In addition, educating the residents was burdensome on top of the nurses’ own work. Nurses believed that their competence and their educational burden should be respected by medical teachers and residents. As reflected in the quantitative results, the nurses recognized their educational role as teachers and providers of emotional support, but their working conditions led to inadequate support for medical residents.

3.5.1. Understanding Working Environments and Culture
The nurses considered that medical residents needed to change their working style in the hospital. Initially, residents’ working styles impinged on nurses’ usual jobs. In addition, the nurses believed that residents should understand how nurses think and work in each ward. One of the participants stated, “The medical residents tended to order various tests for patients with various timing, not respecting the nurses’ work. Emergency situations may require temporal testing for diagnosis and treatment, but in their situations, they should have ordered them by observing the nurses’ work and asking us whether the testing was possible” (Participant 8, chronic care ward).
The nurses had their own culture regarding patient care and their preferred order of work, such as ways of approaching patients. Dialogue between the medical residents and nurses, as well as the nurses’ education of residents regarding their working styles and work culture, stimulated the residents’ learning about that culture, which moderated their behaviors. One participant stated, “The dialogue with the medical residents was important. By having various conversations with the residents, they gradually understood our working styles and culture, especially approaches toward patients. I concretely educated some residents about the nurses’ working styles. The residents tried to change their attitudes and timing in approaching the patients to avoid interrupting nursing care” (Participant 18, chronic care ward).

3.5.2. Working with Respect for the Nurses’ Competence
The nurses considered that their working structure affected their role in medical residents’ education. They had many routine tasks and could not support residents who were experiencing mental stress. The nurses felt that their work-related competence should be respected. One of the participants stated, “Nurses’ routine work is tight. I feel that I should follow the residents’ feelings after dealing with the severe situations of their patients, but I could not support the medical residents dealing with difficult cases with social problems frequently because of the difficulties of working” (Participant 9, chronic care ward). In rural contexts, the hospital workforce is limited. The nurses hope that their educational role is respected; however, the follow-up of medical residents should instead be performed by medical teachers and other medical professionals. One of the participants stated, “The rural hospital needs more comprehensive methods to educate the medical residents. A lack of work can disrupt education. Various professionals should be involved in education and share their challenges to improve educational systems” (Participant 4, acute care ward).
This study clarified nurses’ contributions to rural family medicine education. The quantitative analysis showed that their contributions in terms of teaching nursing and providing emotional support were rated low overall, and lower among nurses working in acute care wards than among those working in chronic care wards, which may be affected by their busy working conditions. The qualitative analysis clarified that rural nurses’ contribution to rural family medicine education focused on education regarding professionalism, interprofessional collaboration, and respect for nurses’ working environment and competence. In addition, nurses struggle to educate medical residents amid busy routine work; this education should therefore be supplemented by various medical professionals. Based on the quantitative results, rural nurses’ busy working conditions may shape their views of the roles of teaching nursing and providing emotional support in acute care wards. In this study, rural nurses perceived rural family medicine education negatively regarding teaching nursing and providing emotional support to the residents, and this trend was stronger among nurses working in acute care wards. Working conditions can affect the quality of education due to the staff’s lack of time and mental stress. Patient characteristics in acute care wards may also affect rural nurses’ ideas regarding education. For patients with acute conditions, various medical professionals must act promptly during care. In such situations, educational attitudes may be impeded, and specific educational skills should be taught. However, there is a lack of education on how to teach medical residents in rural areas. Work stress and patient characteristics may thus affect rural nurses’ ideas of their role in education. The qualitative analysis extracted three themes regarding rural nurses’ contributions to family medicine education: nurturing professionalism, driving interprofessional collaboration, and respect for nurses’ working environments and competencies. The first theme, nurturing professionalism, demonstrated the importance of nurses’ involvement in medical residents’ professional education. As this study shows, physicians in training are educated to become medical professionals who take responsibility for patient care. To become a medical professional, doctors need various clinical experiences built on knowledge from medical teachers. Nurses can observe medical residents’ behaviors more frequently than other professionals and support their practice. Nurses’ involvement in education regarding professionalism is essential in rural areas with a minimal workforce. The second theme, driving interprofessional collaboration, suggests that effective education regarding ethical aspects can be facilitated by interprofessional collaboration in family medicine education. Older, frail patients have multiple biopsychosocial problems impinging on their quality of life. As this study and previous research show, medical residents struggle to manage their treatment and care. Nurses and other medical professionals support treatment and care while respecting patients’ and families’ decisions. In the process of interprofessional collaboration, medical residents can learn how to manage ethical issues, such as the value of extending life and resuscitation among terminal and older patients. Learning can be driven through dialogue and reflection with not only medical teachers but also nurses.
Interprofessional collaboration, including on ethical issues, should be promoted in rural family medicine education. Improved interprofessional collaboration could contribute to better-quality patient care. The third theme, respect for nurses’ working environment and competence, indicates that effective family medicine education in rural areas requires nurses to educate medical residents about their working conditions and competence in order to reduce the burden on nurses. For such education, comprehensive approaches involving various professionals in the educational process, which can lead to better community care, can be essential. In this research, rural nurses struggled to educate medical residents amid busy routine work, and these constraints should be communicated to medical residents to support effective work and education. The involvement of not only nurses and medical teachers but also social workers and therapists in educational processes can be effective if their specialties are respected. In addition, other people in communities can be involved in education, which can be effective in rural medical education. Their involvement can create opportunities to better understand the community, leading to respect for health conditions and, ultimately, to better healthcare in rural areas. Future studies should investigate the quality of rural family medicine education involving various medical professionals and people in rural communities. The current study results have significant implications for future research. Accordingly, rural family medicine education should include clinical nurses for the effective provision of education, particularly regarding professionalism, interprofessional collaboration, and residents’ smooth transition to rural working conditions. Based on our research findings, clinical nurses could contribute to the professional education of medical residents through continual dialogue with them. Resident–nurse collaboration in clinical situations could drive the residents’ learning regarding interprofessional collaboration. Education by nurses and other professionals about rural hospitals’ work environments and competencies could mitigate conflicts among professionals. This study had several limitations. First, as it was performed at a single rural Japanese community hospital, the results may have limited transferability. However, the educational system was described in depth, which may enable its application to other settings. The second limitation concerns confirmability. This study was conducted primarily by a medical educator in a community hospital, and the relationship between the interviewer and interviewees may have affected the content of the interviews. To mitigate these limitations, the third author discussed the contents with the second author, who was a nursing specialist. In addition, to improve confirmability and transferability, the authors discussed and reflected on the qualitative data, concepts, and themes until theoretical saturation was reached.
This study clarified that rural nurses’ ideas of their role in family medicine education may be associated with their working conditions. Rural nurses’ education of family medicine residents focused on professionalism, interprofessional collaboration, and respect for working culture and competency. Rural nurses may perceive their role in such education as challenging. Rural family medicine education should incorporate clinical nurses as educators on professionalism and interprofessional collaboration. To this end, other professionals should be more actively involved in improving the quality of education.
Cross-sectional survey on impact of paediatric COVID-19 among Italian paediatricians: report from the SIAIP rhino-sinusitis and conjunctivitis committee

In March 2020, the World Health Organization (WHO) declared the COVID-19 pandemic. The novel coronavirus, SARS-CoV-2, a top threat to global health, emerged in Wuhan (China) in December 2019 and rapidly spread worldwide. Globally, by the end of August 2020, total confirmed cases of COVID-19 had reached over 24,765,000 with over 837,000 deaths, and daily data show continuous increases in new COVID-19 cases. In Italy, we have experienced serious outbreaks linked to the first cluster, in South Lombardy, with about 300,000 confirmed cases and more than 35,000 deaths. Research shows that COVID-19 causes symptoms including fever, dry cough, dyspnoea, fatigue, and lymphopenia and, in more severe cases, severe acute respiratory syndrome (SARS) and even death. Every age group may be affected, but children seem to be safeguarded from severe COVID-19, which is linked to comorbidities associated with lethal infection (obesity, diabetes, and chronic heart disease). It has been reported that asthma and allergy, the most common chronic disorders in children, are not among the top 10 comorbidities associated with COVID-19 fatalities. Nevertheless, concerns about asthma and the risk of disease and related outcomes remain high. Currently, data on COVID-19 in Italian children are limited and almost certainly underestimated, since children are frequently asymptomatic or present mild or moderate infection, similar to the common cold. In order to evaluate the impact of paediatric COVID-19 among Italian paediatricians, we sent a 20-question anonymous internet-based survey to 250 Italian paediatricians, with particular attention to allergic symptoms and those affecting the upper airways.
The questionnaire was conceived and pretested in April 2020 by a working group of experts of the Italian Paediatric Society for Allergy and Immunology (SIAIP), based on their personal clinical experience and on an extensive review of the most relevant international literature on COVID-19 searched on MEDLINE, EMBASE, and SCOPUS. The revised and approved paper version of the questionnaire was then converted into a web-based survey with Google Drive (Google Drive™, © 2012 Google Inc., all rights reserved), a free internet platform for creating internet-based survey forms which allows real-time digital archiving of collected data, real-time presentation of survey results, and simple download of all data of registered anonymised participants in Excel© format for statistical analysis. The questionnaire was structured into different sections comprising 20 categorized and multiple-choice questions. The first part included questions about epidemiological data, followed by a second part assessing how a suspected COVID-19 infection is managed and personal experiences of such management. The third part concerned questions about patients’ clinical characteristics and clinical manifestations. Finally, the last part focused on participants’ knowledge in the field and educational priorities. The questionnaire was administered in Italian. The reported time to complete the survey was approximately 10 min. The survey was emailed once, between April and mid-May 2020, to about 250 members of the Italian Paediatric Society for Allergy and Immunology (SIAIP). Participants were allowed to complete only a single survey, duplicate entries were avoided, and responses were scrupulously monitored. Informed consent was not obtained, given that participation was voluntary. No financial incentive was offered. The Ethics Committee of the University of Bari (Italy) was contacted, and no special permission was deemed to be required because the study design satisfied the criteria of an activity audit. Once the questionnaire results were obtained, they were statistically processed. Answers were converted into categorical variables. Differences in categorical variables were evaluated with Chi-square and Fisher exact tests, as appropriate; an illustrative sketch of these categorical tests is given below. SAS® University Edition (Cary, NC: SAS Institute Inc.) was used for all analyses. Data are expressed as percentages; p < 0.05 was considered statistically significant.
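Although the analyses were run in SAS, the same categorical tests can be illustrated in Python. The 2 × 2 counts below are invented for demonstration only and do not come from the survey.

# Illustrative Chi-square and Fisher exact tests for categorical
# survey answers; the counts are invented, not the survey data.
from scipy import stats

# Hypothetical 2 x 2 table: geographical area (North vs Centre/South)
# by monthly referral volume (<= 10 vs > 10 suspected cases).
table = [[20, 12],   # North
         [40, 8]]    # Centre/South

chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)

print(f"Chi-square:   chi2 = {chi2:.2f}, p = {p_chi2:.3f}")
print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")
# As in the study, p < 0.05 would be considered statistically significant.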
A total of 99 participants took part in our survey and provided responses to our electronic questionnaire by May 15th, 2020. The characteristics of the survey participants are detailed in Table . Among responders, 52% practiced in a place without a children’s hospital dedicated to COVID-19 care. Of the respondents, 86% reported that up to 10 patients per month were referred to them for suspected SARS-CoV-2 infection, 11% reported up to 20 per month, and only 3% more than 20, starting from February 2020 according to the majority of them (86%). In particular, the distribution of patients reported per month varied significantly according to geographical area (P = 0.02): in the North of Italy the rate of referred patients was higher than in the rest of the country. Moreover, we found that only infectious disease specialists reported more than 20 referrals per month for suspected SARS-CoV-2 (P < 0.0001). The diagnosis of COVID-19 was made once a month according to 34% of participants, once a week for 23%, once every 2 months for 19%, once every 3 months for 10%, and once a day for 9%. Almost all respondents (98%) reported having cared for a maximum of 10 infected children; the remaining 2% reported more than twenty. Among these patients, according to 75% of responders, a maximum of 20% were affected by allergic rhino-conjunctivitis, particularly in the North of Italy, while in the Centre and South the incidence was higher (P = 0.09). Almost the same applied to asthma: 83% of responders declared that up to a maximum of 20% of affected children were asthmatic, 13.5% reported 20–40%, and the remaining 3.5% reported 40–60% (Fig. ). As for allergic conjunctivitis, we also found a higher incidence of asthma in the Centre and South than in the North (P = 0.03) (Table ). On average, these children were ≤3 years old according to 24% of participants, 4–6 years old for 25% of responders, 7–10 years old for another 25%, up to 15 years old for 21%, and 16 years or older for the last 5%. Of the respondents, 90% agreed that immediate isolation in a proper place and alerting the public health service system was the first step in case of a suspected infection, while 13% declared they would suggest isolation alone, without any geographical differences. Eleven percent of respondents would refer patients to the emergency department, and the last 10% would advise an emergency call. However, 45% of participants clarified that confirmed cases of SARS-CoV-2 infection had undergone nasopharyngeal and oropharyngeal swab sampling, 32% reported that it was not performed in suspected cases, and 23% that it had not yet been performed. In particular, we found that the rate of swab sampling not being performed was higher in the Centre and South than in the North (P = 0.02). Regarding signs and symptoms suggestive of SARS-CoV-2 infection, the majority of respondents recognized fever (89%), cough (63%), and gastrointestinal disease (37%) as the main symptoms. Interestingly, olfactory and gustatory dysfunctions in children are rare (Fig. ). Finally, the majority of Italian paediatricians (85%) declared having a good knowledge of COVID-19; however, all of them would be interested in increasing their knowledge about the impact of COVID-19 on Italian children.
This cross-sectional survey provides information on the impact of COVID-19 among paediatricians. A good level of knowledge in the field is linked to successful practice; evaluating knowledge, attitudes and practice among paediatricians is therefore of considerable practical importance. It should be noted that responders were distributed evenly across Italy, in order to guarantee information from all Italian regions. In addition, no significant difference was identified with regard to the management of a suspected COVID-19 case across Italy. Regarding signs and symptoms suggestive of COVID-19, our results showed that in children, unlike adults, olfactory and gustatory dysfunctions are not prevalent. These findings are in line with a recent meta-analysis which included research performed in China (and just one clinical case in Singapore) . Allergic rhino-conjunctivitis and asthma, according to our data, do not seem to be risk factors for developing more severe COVID-19. However, since the role of asthma in increasing the severity of COVID-19 is still unclear, it remains a great concern for patients and paediatricians. The diagnosis and management of COVID-19 in children are still difficult owing to the mild or moderate clinical course. Moreover, asymptomatic infections were not infrequent, with the attendant risk of unconfirmed disease. This seems to be a frequent problem in daily clinical practice. Nevertheless, our data showed that Italian children have a good chance of being tested for SARS-CoV-2, indicating the importance of an accurate diagnosis, which will facilitate appropriate treatment options and preventive measures. Our study has some limitations. Although almost 100 participants completed our survey, only those with Internet access and available email addresses were recruited. Other limitations relate to the pilot nature of our survey and include the use of a non-standardised questionnaire; however, to the best of our knowledge, standardised and validated surveys on this issue are not available. Selection bias may also stem from the recruitment methodology: those who felt more interested in COVID-19 may have been more inclined to complete the survey.
This study is the first to provide a comprehensive overview of COVID-19 knowledge and impact among paediatricians in Italy with regard to allergic asthma and upper airway involvement. From our point of view, it provides important information that is clearly useful for improving good practice. Our data confirmed that comorbidities such as asthma or rhino-conjunctivitis do not appear to represent a risk factor for more severe COVID-19 disease. Moreover, symptoms such as anosmia and ageusia are rare in the paediatric population.
The efficacy of hydrogen sulfide (H

Rainwater harvesting and sulfate-reducing bacteria
In the twenty-first century, freshwater scarcity has continued to be a concern worldwide. It is estimated that approximately 4 billion people contend with severe water scarcity for at least 1 month out of the year (Mekonnen & Hoekstra, ). To combat water scarcity, a growing number of people have taken to rainwater harvesting in the American Southwest. While harvested rainwater can be a valuable tool for combating water scarcity, pathways do exist for microorganisms to enter harvesting devices, which can be of concern if the water is being utilized for purposes such as irrigating edible food gardens or even as a potable source. Testing for indicator organisms, organisms whose presence indicates a potential for pathogen presence, is one mechanism to assess biological contamination of water sources. Coliform bacteria, often described in water quality monitoring as total coliforms (TC), are a group of generally harmless bacteria found in both the environment and the gastrointestinal tracts of humans and animals (Washington State Division of Environmental Public Health, ). Escherichia coli (E. coli) are a specific type of coliform known as a fecal coliform. E. coli are more common in the gastrointestinal tract than in the environment, and they can be pathogenic (Washington State Division of Environmental Public Health, ). The presence of E. coli and TC are standards for assessing microbial water quality. However, alternative indicator organisms, such as sulfate-reducing bacteria (SRB), have also been employed to assess microbial water quality in various environments. The shedding of SRB by warm-blooded animals is what allows these organisms to serve as a potential indicator of water contamination (Gupta et al., ). In addition, the ease of use and lower cost of testing for SRB compared to traditional fecal coliforms have made it a popular choice where water quality is of concern (Sobsey & Pfaender, ). Our study assessed the reliability of SRB as an at-home indicator organism for rainwater quality evaluation and aimed to provide tools for homeowners to assess their rainwater quality, especially those who use harvested rainwater to irrigate their produce. SRB are prokaryotic microbes that help to facilitate nature's sulfur cycle. In anaerobic environments, SRB utilize sulfate as a terminal electron acceptor, producing sulfide products, usually in the form of hydrogen sulfide (H2S). There are more than 220 known species of SRB, creating a plethora of microorganisms that could produce a positive result using the SRB testing method (Barton & Fauque, ). SRB live in a wide range of environments, including oceanic waters and sediments, freshwater, brackish swamps, hydrothermal vents, hot springs, and deep subsurface soils (Fishbain et al., ). Like other residential water sources, rainwater harvesting devices are generally closed to the outside environment to reduce contamination. However, rainwater harvesting systems in Arizona, USA, are particularly hospitable to SRB, owing to ambient temperatures (from a low of 17.5°C (290.65 K) to a high of 30.1°C (303.25 K)) that overlap with the 28°C (301.15 K) to 30°C (303.15 K) optimal growth range for SRB (U.S. National Weather Service, ; Virpiranta et al., ).
About Project Harvest
The University of Arizona's Project Harvest (PH), in partnership with Sonora Environmental Research Institute, Inc. (SERI), was designed as a co-created citizen science (CS) project focused on evaluating potential microbiological, organic, and inorganic pollutants in harvested rainwater, as well as in irrigated soil and grown plants. An integral part of Project Harvest involves training homeowners to test their harvested rainwater samples and to interpret the testing results. This work describes efforts to determine whether easy-to-use-and-interpret, low-cost SRB tests, such as Hach's PathoScreen™ field test kits, are a viable alternative for at-home testing of the microbial quality of harvested rainwater.
Community recruitment and training
Recruitment for the project occurred throughout four Arizona, USA, communities: Dewey-Humboldt, Globe/Miami, Hayden/Winkelman, and Tucson. These communities were selected based on several critical factors, including their proximity to Toxic Release Inventory (TRI) sites and National Priorities List (NPL) sites, the research interest and/or concern expressed by community members, and previously established relationships between the PI and community leadership and members (Davis et al., ). Project Harvest employed a promotoras de salud (community health worker) model to better facilitate communication with partnered communities. Promotoras assisted in primary duties such as sample collection, recruitment, and training of participants. Participants were consented under University of Arizona Institutional Review Board rules, ensuring the rights and welfare of human participants in research. Before rainwater collection began, project participants in each community were supplied with kits that included all materials required to complete the sample collection and/or analysis (see https://projectharvest.arizona.edu/about#sampling-methodologies for details).

Microbial rainwater assessment
In the first 2 years (2017–2019), PH participants were randomly assigned to one of two method analysis categories: Do-it-Yourself (DIY) and traditional lab (Lab). For year three (2019–2020), participants could choose which method they preferred. One objective for both DIY and Lab participants was to determine the microbial quality of harvested rainwater as measured by the presence of indicator organisms. DIY participants were provided Hach PathoScreen™ field test kits (HACH, Loveland, CO) to analyze their collected rainwater samples at home for the presence and activity of SRB. Participants were instructed to report the data back to the University research team. In contrast, samples collected by Lab participants were transported to the University of Arizona, where the project team analyzed them using IDEXX Colilert® (IDEXX Laboratories, Westbrook, ME) for E. coli and TC bacteria. Initially, the project was designed with the intention that participants would complete both the Lab and DIY methodologies in year three (2019–2020), creating a direct one-to-one comparison for validation. However, participant and promotora feedback (details below) revealed participant fatigue, making it no longer feasible to have participants conduct both methods. Instead, between 2018 and 2020, we analyzed the submitted Lab samples using both the IDEXX Colilert® and the Hach PathoScreen™ methods in order to validate the use of a low-cost, at-home alternative methodology for testing harvested rainwater. Both Lab and DIY participants submitted samples during December and February for the winter season, and June and September for the monsoon season, encompassing the major precipitation periods in Arizona, USA.

E. coli and total coliform rainwater quality assessment (Lab method)
The Lab microbial analyses consisted of collecting water from rainwater harvesting tanks in a sterile plastic 250-ml sample bottle. Lab participants were instructed to drop off samples at designated points in their respective communities for retrieval and analysis by the University team. To validate the SRB method for harvested rainwater samples, Lab samples were tested for both TC and E. coli using Colilert®, and for SRB using PathoScreen™.
All microbiological testing was done in accordance with manufacturer guidelines (IDEXX Laboratories, ; Hach, ). Colilert® results range between <1.0 MPN/100 ml (lower limit of detection (LOD)) and >2419.6 MPN/100 ml (upper limit of quantification (ULOQ)). Similarly, PathoScreen™ has a LOD of <1.1 MPN/100 ml and an ULOQ of >8.0 MPN/100 ml.

SRB rainwater quality assessment (DIY method)
While completing the DIY method at home, participants collected rainwater samples in a sterile plastic 250-ml sample bottle. They then transferred 20 ml of their sample into each of five pre-marked sterile 25-ml glass vials, followed by addition of a powdered substrate supplied by the manufacturer and gentle swirling to dissolve the substrate into the water sample. Participants then incubated the vials in a location at their home with a constant temperature between 25 and 30°C (298.15–303.15 K). Once a day, for 5 days, participants recorded the ambient temperature and any color change of the water within each vial. Samples were "positive" if a change from the original amber color to black occurred, or if black precipitates were observed in any vials (indicative of sulfate reduction to form iron sulfides) (Fig. ). If color change or black precipitates were observed, the positive vials were compared to a most probable number (MPN) chart (Fig. ) to determine the concentration of SRB in the sample, as sketched after this section. Once participants completed the experiment at home, they submitted their results, with the option to send vial pictures, to the Project Harvest research team via the Project Harvest website and journal entry portal, email, text message, or a physically mailed worksheet. Once the online portal system was established, we recognized that technological comfort and access might prove to be a barrier to returning results. We therefore added a paper worksheet as an alternative submission method for DIY-tested SRB results. While this did not allow us to obtain photos for validation, it improved results submissions.

Participant interaction and preference
In addition to ongoing data sharing and participant contact, to better gauge and understand participant sampling method preference (DIY or Lab) and its rationale, we hosted "Open House" events towards the end of Year 2 (2018–2019). At these events, participants were prompted to sign up for the kit of their choice for the third sampling year (2019–2020). Participants who were not in attendance were given the option to be interviewed by phone, and the remaining participants were contacted through a text campaign, prompting the selection of which method, DIY or Lab, they would like for Year 3. Participants who did not respond with a kit preference were divided into two groups. Those who had been present since the beginning of the study were automatically assigned a Lab kit. To give all participants a chance to experience both kit types, those who joined in Year 2 were assigned the opposite kit from the one they had recently completed.

Statistical analysis on water quality methods
For statistical comparison, microbial results below the LOD for both tests were recorded as half of the LOD (e.g. <1.0 MPN for Colilert® was recorded as 0.5 MPN), and the ULOQ was rounded to 2420.0 and 8.1 for Colilert® and PathoScreen™ respectively. Taking half the limit of detection is a standard practice for dealing with censored values in environmental monitoring; separately, the ULOQ decision was made to ensure all censored values were constant (Croghan & Egeghy, ).
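The sketch below (in R) walks through the vial read-out and the censoring recode just described. Only the end points (<1.1 recoded as 0.55; >8.0 recoded as 8.1) come from the text above; the intermediate MPN values for one to four positive vials are assumptions consistent with a standard five-portion MPN series, and the kit's printed chart remains authoritative.

# Assumed MPN chart for five 20-ml portions; end points follow the recoding
# rules stated above, intermediate values are illustrative assumptions.
mpn_chart <- c("0" = 0.55,  # <1.1 MPN/100 ml, recorded as half the LOD
               "1" = 1.1,
               "2" = 2.6,
               "3" = 4.6,
               "4" = 8.0,
               "5" = 8.1)   # >8.0 MPN/100 ml, recorded as a constant 8.1

positive_vials <- 2                                # e.g. two vials turned black by day 5
srb_mpn <- mpn_chart[as.character(positive_vials)]
srb_mpn                                            # 2.6 MPN/100 ml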
Results from both Colilert® and PathoScreen™ were recorded in Microsoft Excel (Seattle, WA, 2016, Version 16.0) spreadsheets. The results were then uploaded into RStudio software (Boston, MA, 2020, Version 3.6.3) for statistical processing. The relationship between E. coli MPN and SRB MPN was measured using a Spearman rank correlation test, as values above the ULOQ were not discretely known without a dilution series; Spearman's is a non-parametric test measuring the monotonic relationship between two variables. Presence/absence categories for both tests were also recorded, and the subsequent data were tested for association using Pearson's chi-square test. Finally, a point-biserial correlation was conducted to determine the correlation between Colilert® MPN and SRB presence/absence.
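A minimal sketch of the three tests named above is shown below; the vectors are invented for illustration (with so few observations, R will warn that exact p-values cannot be computed, which can be ignored for this sketch):

# Censored MPN values per sample, substituted as described in the Methods
ec_mpn  <- c(0.5, 12.2, 2420.0, 0.5, 36.4, 0.5)   # E. coli (Colilert®)
srb_mpn <- c(0.55, 0.55, 2.6, 0.55, 1.1, 8.1)     # SRB (PathoScreen™)

cor.test(ec_mpn, srb_mpn, method = "spearman")    # Spearman rank correlation

ec_pa  <- ec_mpn  > 0.5                           # presence/absence recodes
srb_pa <- srb_mpn > 0.55
chisq.test(table(ec_pa, srb_pa))                  # Pearson chi-square association

# Point-biserial correlation = Pearson correlation against a 0/1 variable
cor.test(ec_mpn, as.numeric(srb_pa))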
SRB vs coliform bacteria (Lab method)
As previously stated, we modified our approach to validate the SRB method. In the second and third years, 2018–2020, a total of 226 samples were collected by Lab participants and submitted to the University of Arizona. These samples were tested for SRB and TC utilizing both DIY and Lab methodologies. The majority (n = 200, 88.5%) of collected samples were negative for SRB; only 26 (11.5%) were positive (Table ). The average MPN for all SRB samples was 1.1 MPN/100 ml (Fig. ). Samples were tested for both TC and E. coli concentrations, then compared to SRB results. SRB MPN had a positive correlation (r = 0.245, p < 0.05) with TC MPN (Fig. ). There was no discernible correlation between E. coli MPN and SRB MPN in harvested rainwater (Fig. ). After the MPN comparisons, SRB results were compared on a presence/absence basis against TC and E. coli (Tables and ). In 82 samples (36.3%), neither coliform bacteria nor SRB were detected. In 118 samples (52.2%), Colilert® detected coliform bacteria but no SRB were detected via the DIY method. In 5 samples (2.2%), Colilert® did not detect any coliform bacteria while the DIY method did detect SRB. In 21 samples (9.3%), coliform bacteria and SRB were both detected (Table ). This produced an approximate 45.6% agreement rate between TC and SRB for detecting water contamination (a worked check of these counts appears at the end of this section). The majority, 184 samples (81.4%), were negative for both E. coli and SRB (Table ). Conversely, only 4 samples (1.8%) were positive for both E. coli via Colilert® and SRB via PathoScreen™ (Table ). Bacterial load did vary by seasonality, though the presence/absence of our target organisms was fairly consistent across the two rainy seasons (Tables and ). Overall, SRB presence/absence had a positive correlation (r = 0.1428, p < 0.05) with TC presence/absence. When tested against E. coli presence/absence, SRB presence/absence did not have a statistically significant correlation.

Participant self-reported results
Between 2017 and 2020, DIY participants reported their at-home results for 229 SRB tests. The majority (n = 182, 79.5%) of participant samples were negative for SRB; only 47 (20.5%) were positive (Table ). The average MPN for all participant-submitted SRB samples was 1.51 MPN/100 ml (Fig. ). The median for those samples was <1.1 MPN/100 ml, and the geometric mean was also <1.1 MPN/100 ml. SRB MPN varied with seasonality, though this was not significant (p > 0.05) (Table ).

Participant-reported feedback
Participants were given the option to choose either the DIY or Lab method. Thirty-eight participants (60.3% of those who chose) opted to complete the Lab method, 17 (27.0%) selected the DIY method, and 8 (12.7%) chose to complete both methods for Year 3. The remaining participants (n = 91) were assigned the Lab method.
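As referenced above, the two agreement rates can be checked directly from the reported counts. For the E. coli comparison only the concordant cells (184 and 4) are stated explicitly; the discordant cells below are derived from the reported totals (26 SRB-positive samples minus the 4 that overlap with E. coli gives 22, and 226 samples overall then leaves 16):

# TC vs. SRB presence/absence (counts reported above)
tc_srb <- matrix(c( 82,   5,    # TC absent:  SRB absent, SRB present
                   118,  21),   # TC present: SRB absent, SRB present
                 ncol = 2, byrow = TRUE)
sum(diag(tc_srb)) / sum(tc_srb)   # (82 + 21) / 226 = 0.456 -> 45.6% agreement

# E. coli vs. SRB presence/absence (discordant cells derived as noted above)
ec_srb <- matrix(c(184,  22,
                    16,   4),
                 ncol = 2, byrow = TRUE)
sum(diag(ec_srb)) / sum(ec_srb)   # (184 + 4) / 226 = 0.832 -> 83.2% agreement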
Sulfur, sulfate, and SRB in the home environment
While we did observe positive associations, particularly between total coliforms and sulfate-reducing bacteria, there are other sources of sulfates in the environment, and in Arizona there are several potential mechanisms by which sulfates may enter rainwater harvesting systems. As sulfates are the main electron acceptor for SRB in anaerobic environments, environments containing them provide a natural habitat for SRB growth (Phyo et al., ). Several pathways exist for potential sulfate contact with water sources in residential settings. In Arizona, mining is a dominant industry, with 380 active mines recorded in 2019 (Richardson et al., ). One common byproduct of the mining industry is acid rock drainage (ARD) in surface waters, which contains sulfuric acid (H2SO4), among other compounds (Dos Santos et al., ). Mine tailings piles also commonly contain sulfide compounds such as pyrite (FeS2), which upon exposure to a humid atmosphere can oxidize to ARD (Dos Santos et al., ; Lim et al., ). Tailings piles are present in the Project Harvest partner communities, where eolian processes may contribute to deposition on participant roofs, which in turn can be washed into water harvesting systems during rainfall events. Another mechanism for increasing sulfate concentrations around the home and in water harvesting systems is the combustion of fossil fuels containing sulfur (Perraud et al., ). The refining of ores creates sulfur dioxide (SO2), among other byproducts, which in turn increases acid rain and acidic particle dry deposition, and potentially bioaerosols, bringing sulfates onto rooftops and into water harvesting devices (U.S. Environmental Protection Agency, ). Facilities that produce SO2 exist near the participating communities of Hayden (i.e. the ASARCO smelter) and Miami (e.g. the Freeport-McMoRan mine), AZ (Arizona Department of Environmental Quality, ). In urban areas, the combustion of fossil fuels is of greater concern, and SO2 concentrations generally result from vehicles, industrial facilities, and power generation from coal and, to a lesser extent, natural gas (U.S. Environmental Protection Agency, ).

Comparison of SRB and coliform testing from Lab-categorized participants
Determining potable water quality centres on testing for indicator organisms, such as TC or E. coli. The reason for the historic use of these organisms is their presence in the gastrointestinal tracts and fecal waste of humans and animals (Ohrel & Register, ). SRB testing, conversely, was born out of the need for a lower-cost test, one that could be performed simply and without the need for a laboratory setting (Gupta et al., ). While SRB occur naturally in the environment, they are also commonly present in the gastrointestinal tract, and their presence is therefore a recognized sign that fecal contamination may have occurred (Gupta et al., ). Project Harvest Lab sample results revealed that when coliform bacteria were present in a harvested rainwater sample, SRB were determined to be present 15.1% (21/139 samples) of the time; this percentage increases slightly when SRB are compared to samples positive for E. coli, at 20.0% (4/20 samples). However, the E. coli–SRB comparison showed no association between the two organisms' presence (p > 0.05). Based on Spearman's test, there were a few significant correlations between traditional TC MPN tests and the SRB MPN method.
Spearman's test and the chi-square test revealed a positive correlation (r = 0.245, p < 0.05) between TC MPN and SRB MPN, and a positive association (r = 0.143, p < 0.05) between TC presence/absence and SRB presence/absence, respectively. The point-biserial test also revealed a positive correlation (r = 0.197, p < 0.05) between TC MPN and SRB presence/absence. Our data set does show a departure from the literature, which generally reports moderate to strong correlations between TC and SRB. Khush et al. ( ) observed that when TC concentrations were intermediate to high (CFU ≥ 1000/100 ml), SRB methods showed increasing sensitivity. In that study, samples were collected from rural Southern India, and presence/absence SRB tests were compared against the enumeration of TC (Khush et al., ). Another comparison of SRB tests and the Colilert® method in contaminated tap water in Indonesia determined that the Colilert® and SRB methods were qualitatively and quantitatively equal in their sensitivity for recovering their respective indicator organisms (Kromoredjo & Fujioka, ). Overall, the literature generally finds that SRB methods work for water quality testing (Table ); however, the level of agreement among studies does vary (e.g. Sobsey & Pfaender, ). The aspects on which most SRB studies agree are that the tests are lower cost, have lower technological and training requirements, and have shorter time windows for results, and that SRB tests correlate well with traditional TC tests in environments where higher concentrations of fecal matter are of concern.

Participant preference for Lab or DIY methods
Since one of the goals of this research was to determine whether SRB tests can function as an at-home, low-cost and low-effort test, understanding participant ease and comfort with the method is important when determining whether PathoScreen™ could be an effective alternative. Among those who indicated a preference and rationale, Lab kits were primarily chosen (n = 38, 60.3%) because they were easier to complete (n = 7), less time consuming (n = 4), and provided more contaminant concentration data, 33 contaminants vs. two (n = 2). Of the 27% (n = 17) who selected DIY, flexibility in sampling and time frame (n = 3) was the primary reason given. Finally, some participants asked to complete both kits as originally designed (n = 8, 12.7%). Participants who selected both kits stated that they enjoyed conducting scientific activities (n = 4) and were interested in receiving more harvested rainwater data (n = 2). Overall, the primary factors that influenced participant kit selection were time commitment and flexibility. The DIY method requires participation across 5 days, while the Lab method involves filling a bottle and dropping it off at the designated community location. Conversely, DIY participants had their results on day 5, while Lab participants received results at the end of the year during data-sharing events. Less frequently mentioned was the ability to complete kits with family, allowing the study to serve as an interactive tool and bonding experience.

Study limitations
In general, the low number of samples positive for E. coli (20/229) in the overall sample set may have limited our ability to ascertain certain statistical trends. As previously stated, the initial study design intended for participants to complete both the DIY and Lab methods.
However, based on participant and promotora feedback, it was decided to modify this approach, as the team recognized that having participants do both methods could be a burden. An adjustment was made to use the participant Lab kit samples to perform both methods in the University lab. After delivery to the laboratory, samples were first processed for Colilert®, owing to our obligation to report lab-tested results back to participants, and then for turbidity, which required 10 ml. The remainder of the sample (100 ml) was then tested using the SRB method, which resulted in the exclusion of samples submitted with less than 210 ml of water. Originally, we anticipated that participants would submit photos of their DIY results, which would serve as a mechanism for results validation. In total, only seven photos of DIY SRB vials were submitted with results, though the vast majority of DIY participants did submit their numeric results (an example of participant-submitted images can be seen in Fig. ). While the seven submitted images are a small sample of the 229 submitted results from DIY participants, within that group most participants (85%) interpreted MPN results correctly and all interpreted presence/absence properly. Future studies should consider the target communities' comfort with and access to technology, as well as technological literacy.
Most SRB studies previously conducted were designed to determine fecal contamination in surface water (e.g. Sobsey & Pfaender, ). With regard to harvested rainwater, fecal contamination (from rodent, avian, and reptilian species) is one pathway, but not the sole pathway, for contamination. Particles assimilated by falling rainwater, and eolian deposition of nearby mine tailings dust, pose potential supplementary sources of contamination in our partner communities. While the TC presence/absence test had a low level of agreement with the SRB presence/absence test (45.6%), there was an association between the two tests. Conversely, the E. coli presence/absence test had a high level of agreement (83.2%) with SRB tests, but no correlation, indicating that the agreement is likely owing to harvested rainwater samples lacking both E. coli and SRB. The simplicity and safety of SRB tests (e.g. Hach's PathoScreen™) bode well for use by lightly trained personnel. However, early in Project Harvest, it became clear that internet access and technological literacy/comfort may have served as barriers to participation and access. There are currently no standards for harvested rainwater, and communities that make use of rainwater have water safety concerns. The SRB method could be used to screen rainwater quality for at-home use, specifically for those who use harvested rainwater for irrigation. The low sensitivity of SRB tests makes it difficult to state unequivocally that SRB tests are suitable for harvested rainwater testing. The SRB method could, however, be recommended if certain conditions are met, including (1) the person commonly notices animals on or around their roof/harvesting system, (2) cost is a barrier, and (3) there is a lack of access to more advanced testing methods.
A review on the molecular diagnostics of Lynch syndrome: a central role for the pathology laboratory

Colorectal cancer (CRC) is the most common malignancy within the European Union and ranks second to lung cancer as a cause of cancer-related mortality. CRC results from both genetic and environmental factors. The most common genetic susceptibility for CRC is Lynch syndrome (LS), formerly known as hereditary non-polyposis colorectal cancer (HNPCC). LS accounts for approximately 3% of all CRCs, and also for 2% of all endometrial cancers. The burden of LS is considerably greater than these percentages imply, as the cancers are diagnosed at a young age and synchronous or metachronous malignancies occur in 30% of the patients. LS is characterized by a high lifetime risk for the development of CRC (20–70%), endometrial cancer (15–70%) and other extra-colonic cancers (<15%) [ – ]. These extra-colonic malignancies include carcinomas of the small intestine, stomach, pancreas and biliary tract, ovary, brain, upper urinary tract and skin. LS is caused by germline mutations in mismatch repair (MMR) genes, and the definitive diagnosis is currently made by identification of an inactivating germline mutation in one of the MMR genes MLH1, MSH2, MSH6 or PMS2. Early detection of LS is of great importance, particularly in pre-symptomatic mutation carriers, since colonoscopic surveillance has proven to reduce CRC morbidity and mortality by 65–70% [ – ] and prophylactic surgery may effectively prevent endometrial and ovarian carcinoma. Individuals with a predisposing mutation are candidates for participation in surveillance programs. The diagnosis of LS is hampered by the absence of specific diagnostic features, and the first manifestation in many patients is the presence of an advanced cancer. Furthermore, DNA mutation analysis is time consuming and expensive. For these reasons, DNA analysis is generally preceded by a molecular diagnostic work-up to select patients as candidates for genetic tests. This molecular diagnostic work-up may be guided by several clinical and pathological criteria, such as the presence of LS-associated malignancies, the number of malignancies and age at cancer diagnosis, family history, and histological tumour features such as mucinous or signet-ring differentiation. In this review, we address the central role of the pathologist in the selection of patients for germline diagnostics of LS, the molecular analyses to identify LS, as well as the molecular basis of LS.
Different models and strategies have been developed to identify patients with LS. In 1990, the Amsterdam Criteria I were developed to provide a basis for uniformity in collaborative studies to find the disease-causing gene ( ). These criteria were designed to be highly specific at the expense of sensitivity. They were criticized because extra-colonic tumours were not taken into account, thereby excluding classical LS families. Therefore, the Amsterdam Criteria II were established in 1999 ( ). However, many families with the syndrome (i.e. mutation carriers) do not meet these criteria, usually because the families are too small or there is a late onset of the disease. In addition, obtaining a thorough family history is difficult in clinical practice, and patients may have limited knowledge of their family history. In 1997, the Bethesda Guidelines were published to select patients whose tumours should be analysed for molecular features associated with LS, i.e. microsatellite instability (MSI), to identify potential mutation carriers ( ). The Bethesda Guidelines were revised in 2004 to make them more suitable for use in clinical practice; they are based not only on family history, but also on age at cancer diagnosis, number of LS-associated carcinomas and certain histological tumour features ( ). These histological tumour features, associated with LS, include the presence of tumour-infiltrating lymphocytes, a Crohn's-like lymphocytic reaction, mucinous or signet-ring cell differentiation, and a medullary or undifferentiated and solid growth pattern. The additional value of these pathology characteristics in the selection of tumours for further LS testing has been described previously. However, these histological features are related to both microsatellite-unstable sporadic tumours and LS tumours; the ability to identify LS patients on the basis of these tumour features alone is therefore limited. In addition, the assessment of these histological tumour features indicating MSI is poorly implemented in daily clinical practice. At present, the most widely accepted recommendation for the identification of patients with LS is based on the combination of these revised Bethesda Guidelines and MSI testing. This combination has proven to be an effective and efficient strategy for LS identification, with a sensitivity for the detection of mutation carriers reported from 72% up to 100% [ – ] and a specificity ranging from 77% to 98% [ , , ]. However, these criteria have been criticized because of the use of broad and complex variables, and families with MSH6 and possibly also PMS2 mutations remain undetected. It has also been shown in several studies that these criteria are poorly implemented in clinical practice [ , – ]. In 2005, a Dutch group therefore developed a new strategy for the detection of LS. In this strategy, the pathologist selects newly diagnosed patients fulfilling one of the following criteria for MSI analysis: (1) CRC before the age of 50 years, (2) two LS-associated tumours, including synchronous or metachronous CRCs or LS-associated tumours, or (3) adenoma before the age of 40 years. These criteria, known as the MIPA criteria, simplify the Bethesda Guidelines in such a way that pathologists, without knowledge of family history, can easily apply them. These criteria were found to be effective, efficient and feasible in daily practice.
In The Netherlands, the diagnosis of LS is currently based on a nationwide guideline for MSI analysis ( ), introduced in January 2008 (http://www.oncoline.nl). This guideline resembles the MIPA criteria. MSI analysis (and immunohistochemistry of the MMR proteins) is requested by the pathologist in patients newly diagnosed with CRC or endometrial carcinoma before the age of 50 years, or in patients with two LS-associated tumours (including synchronous and metachronous CRCs or LS-associated tumours) before the age of 70 years. The presence of multiple LS-associated cancers is registered in PALGA, the nationwide network and registry of histopathology and cytopathology in The Netherlands (http://www.palga.nl). For MSI analysis based on a positive family history, referral to a clinical geneticist is indicated. In those cases, MSI analysis will generally be performed when the (revised) Bethesda or Amsterdam Criteria are met and if archival paraffin-embedded tumour tissue can be obtained. Since clinical criteria do not quantify the likelihood of being a mutation carrier, refined algorithms and multivariable models have been developed to make a quantitative estimation of the risk of carrying a germline MMR-gene mutation, without the requirement of tissue. Several models that combine personal and familial data have been developed, such as the Leiden model, the Edinburgh model, PREMM1,2 and the MMRpro model [ – ]. One of the advantages of the quantitative models is that the threshold for sensitivity or specificity can be adjusted to the clinical situation. However, the role of these models in daily clinical practice remains to be determined. At present, a study (called LIMO and coordinated by the Erasmus MC, Rotterdam, The Netherlands) is being performed to determine whether further improvement of LS diagnostics can be obtained by performing MSI analysis in CRC patients up to the age of 70 years. MSI analysis is performed in a prospective consecutive series of 1000 newly diagnosed CRC patients ≤70 years, and the results are expected in 2010.
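As a minimal sketch (continuing with R for consistency with the rest of this document), the pathologist-facing referral rule described above can be encoded as follows; the function name and inputs are invented for illustration, and the list of LS-associated sites follows the tumour spectrum given in the introduction of this review:

# Hypothetical helper encoding the 2008 Dutch guideline for requesting MSI analysis
needs_msi_analysis <- function(tumour_types, ages_at_diagnosis) {
  ls_sites <- c("colorectal", "endometrial", "small intestine", "stomach",
                "pancreas", "biliary tract", "ovary", "brain",
                "upper urinary tract", "skin")
  # (1) CRC or endometrial carcinoma newly diagnosed before the age of 50
  crit1 <- any(tumour_types %in% c("colorectal", "endometrial") &
                 ages_at_diagnosis < 50)
  # (2) two LS-associated tumours (synchronous or metachronous) before the age of 70
  crit2 <- sum(tumour_types %in% ls_sites & ages_at_diagnosis < 70) >= 2
  crit1 || crit2
}

needs_msi_analysis("colorectal", 46)                        # TRUE: criterion (1)
needs_msi_analysis(c("colorectal", "stomach"), c(55, 63))   # TRUE: criterion (2)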
LS is caused by a germline mutation in one of the MMR genes, most commonly MLH1 and MSH2 (±90%), but also MSH6 and PMS2 [ , , ]. LS patients are born with a germline mutation in one of these MMR genes and acquire inactivation of the second, wild-type allele in their tumours, fulfilling Knudson’s two-hit hypothesis for the inactivation of tumour suppressor genes. Because of the high chance of inactivation of the homologous wild-type allele during life, LS is transmitted phenotypically in an autosomal dominant fashion. The somatic inactivation of the corresponding wild-type allele occurs almost exclusively through small mutations or (partial) gene loss, and bi-allelic inactivation then leads to complete abolition of protein function. This results in a defective DNA MMR system, since the protein products of the MMR genes are involved in the correction of nucleotide base mismatches and small insertions or deletions that arise during DNA replication [ – ]. The mechanism of MMR has been largely elucidated ( ). MSH2 (mutS homologue 2) forms a heterodimer with MSH6 (mutS homologue 6), sliding along the DNA as a clamp to identify single-nucleotide mispairs and small insertions and deletions. MLH1 (mutL homologue 1) dimerizes with PMS2 (post-meiotic segregation 2) and binds to the MSH2-MSH6 complex. Together, this group of four proteins recruits an exonuclease to perform the DNA repair. If any of the four major proteins (MSH2, MLH1, MSH6, or PMS2) is functionally inactive, mismatches are not repaired. A defective DNA MMR system increases the mutation rate and makes the cell vulnerable to mutations in genes controlling cell growth (including tumour suppressor genes and oncogenes), resulting in an elevated cancer risk. When the MMR system is defective, mutations occur frequently in small (usually mononucleotide or dinucleotide) repetitive DNA sequences known as microsatellites. In MMR-deficient tumour cells, the number of nucleotide repeat units of a microsatellite can deviate from that in the corresponding normal DNA; the number of repeats is usually decreased but occasionally increased ( ). This variation in repeat units, and thus in the length or size of microsatellites, is called MSI. MSI (formerly also abbreviated as MIN, or referred to as replication error, RER) is the molecular hallmark of LS, since approximately 95% of all LS-associated cancers show MSI [ – ]. MSI thereby serves as a reliable phenotypic marker of MMR deficiency that is easy to evaluate and can be used to pre-select patients for germline mutation analysis of the MMR genes. Although tumour MSI is a reliable marker for MMR deficiency, it is a marker for LS with limited specificity, since 15% of sporadic CRCs also demonstrate an MSI phenotype. This is mainly caused by somatic hypermethylation of the MLH1 gene promoter. DNA methylation is an epigenetic DNA modification that specifically targets cytosine residues at CpG dinucleotides. Genomic regions that contain a high frequency of CpG dinucleotides are called CpG islands and are present in the promoters of about 40% of all human genes, including the MLH1 gene. Hypermethylation of CpG islands in the MLH1 promoter causes severe inhibition of gene transcription, thereby functionally mimicking an inactivating gene mutation. If both copies of the gene are inactivated (mainly by bi-allelic hypermethylation), the DNA MMR function of MLH1 is lost. This leads to microsatellite unstable cancers, especially in older patients.
MLH1-deficient microsatellite unstable tumours can be assessed for MLH1 hypermethylation to distinguish sporadic CRCs from LS-related cancers. Theoretically, sporadic hypermethylation of the other MMR genes is possible but has not yet been demonstrated. Specific activating mutations in the BRAF oncogene, usually V600E missense mutations (formerly reported as V599E), can be detected in 40–87% of all sporadic microsatellite unstable tumours, whereas an oncogenic BRAF mutation has been described only once among numerous investigated LS tumours [ – , ]. These results indicate that BRAF mutations are closely correlated with MLH1 methylation in sporadic CRCs [ – , , ]. Therefore, BRAF mutation status can be used to identify sporadic microsatellite unstable tumours, although determination of MLH1 gene promoter hypermethylation has been shown to be more sensitive for detecting sporadic MSI tumours. In addition to sporadic forms of MLH1 promoter hypermethylation, germline epimutations of MLH1 (soma-wide mono-allelic hypermethylation of the gene promoter) have also been reported [ – ]. Germline MLH1 hypermethylation, which often shows some degree of mosaicism, is functionally equivalent to an inactivating mutation and produces a clinical phenotype that resembles LS. Inheritance of epimutations is weak, as the methylation can be cleared on passage through the germline (germline MLH1 promoter epimutations are reversible during meiosis), so they can display non-Mendelian inheritance. Heritability of epimutations might also be explained by the inheritance of an unknown predisposition to epimutations rather than the inheritance of the epimutation itself. Although very rare, germline MLH1 promoter methylation should be considered in younger individuals, or individuals with multiple LS-associated tumours without a family history, who present with an MSI tumour showing loss of MLH1 expression. Besides germline MLH1 hypermethylation, a new mechanism of germline MSH2 hypermethylation has recently been discovered. Ligtenberg et al. showed that a germline deletion of the last two exons of TACSTD1, the gene just upstream of MSH2 that encodes the epithelial cell adhesion molecule (EpCAM), leads to inactivation of the MSH2 gene by promoter hypermethylation exclusively in tissues expressing EpCAM (a mosaic pattern). This mechanism may cause LS in patients with MSH2-deficient microsatellite unstable tumours in whom no MSH2 germline mutation is detectable. Identification of these cases is possible by determining the methylation status of the MSH2 gene promoter in the tumour and in EpCAM-expressing normal tissues (e.g. normal colorectal mucosa). In addition, evidence for the presence of MSH2 methylation can be obtained by detecting deletions in the 3′ end of the TACSTD1 gene.
The molecular diagnostics of LS usually begins with MSI analysis. MSI analysis is traditionally performed with a panel of five microsatellite markers proposed by an NCI (National Cancer Institute)-sponsored consensus conference, also known as the Bethesda panel. With these markers, microsatellites in tumour DNA are compared with microsatellites in corresponding DNA from normal tissue. Tumours with more than one unstable marker (or ≥40% of markers) are categorized as having a high degree of MSI (MSI-H), which is suspect for LS or epigenetic MLH1 silencing [ – ]. Those with one unstable marker (20–40% of markers) are categorized as having a low degree of MSI (MSI-L), and tumours with no instability (≤20%) are categorized as microsatellite stable (MSS), as seen in sporadic carcinomas. Although there are no clear differences in clinical or pathological features between MSI-L and MSS tumours, it has been speculated that MSI-L tumours comprise an independent phenotype. However, there is currently no role for separating MSI-L from MSS tumours in the diagnostic work-up. Furthermore, MSI testing is not only important for the recognition of LS but may in the future also improve the clinical management of CRC patients, because patients with microsatellite unstable CRCs appear to have a better prognosis than patients with MSS tumours [ – ] and do not seem to benefit from adjuvant chemotherapy with 5-fluorouracil [ – ]. The Bethesda panel, comprising two mononucleotide repeats (BAT-25 and BAT-26) and three dinucleotide repeats (D2S123, D5S346 and D17S250), has some limitations, mainly caused by the dinucleotide repeats. These repeats are highly polymorphic and are less sensitive and specific for the identification of MSI-H tumours than mononucleotide repeats. Their use in MSI screening requires analysis of corresponding germline DNA, and the interpretation of size alterations in dinucleotide repeats is more difficult due to stutter, a PCR artefact. Their use can result in misclassification of MSI-L tumours as MSI-H. Furthermore, MSH6 mutation carriers may develop tumours (predominantly endometrial cancer) without alterations in these dinucleotide repeats, leading to false MSI-L or MSS results. The limitations of the Bethesda panel have led to the development of a pentaplex panel, which comprises five quasi-monomorphic mononucleotide repeats (see below). This panel shows less size variation among different ethnic populations and has been shown to be superior to the Bethesda panel for the detection of MSI-H tumours. Because the pentaplex analysis is carried out in a single multiplex PCR, the method is simple to use and free of errors due to sample mix-ups. To gain insight into which gene might be affected in patients with MSI-H tumours, MLH1, MSH2, MSH6 and PMS2 protein expression can be assessed by immunohistochemistry. The combination of MSI analysis and MMR protein immunostaining is generally considered the superior strategy for the identification of suspected LS patients. Absence of MMR protein nuclear staining within the tumour cells can be compared with nuclear staining in the normal cells within the same tumour specimen (and the same histological section); the latter then serves as an internal positive control. Because of their heterodimeric nature, different immunohistochemical staining patterns of the MMR proteins can be observed ( ). Loss of MLH1 protein due to MLH1 gene mutation or promoter hypermethylation is usually accompanied by absence of PMS2 in the tumour ( ).
Similarly, absence of MSH2 due to MSH2 mutations results in absence of MSH6 ( ), since MSH6 and PMS2 are unstable without their obligatory partners MSH2 and MLH1, respectively. A mutation in either PMS2 or MSH6 does not lead to loss of MLH1 or MSH2 protein, respectively ( ), because of the formation of heterodimers other than MLH1-PMS2 and MSH2-MSH6. MLH1 can, for instance, dimerize with either MLH3 or PMS1, and MSH2 can also bind to MSH3. Because MLH1 and MSH2 bind to other MMR proteins in the absence of PMS2 or MSH6, there is no concurrent loss of MLH1 and MSH2. To date, no bona fide involvement of PMS1, MLH3 or MSH3 (i.e., inactivating mutations) has been demonstrated in LS. In general, absent MSH2, MSH6 or PMS2 expression in tumour cells with preserved staining in normal cells is suspect for underlying LS and calls for germline testing. Absent MLH1 (and PMS2) expression can indicate either LS or a sporadic tumour with epigenetically silenced MLH1. If epigenetic MLH1 silencing has been excluded by analysis of MLH1 hypermethylation and/or BRAF mutation analysis, MLH1 germline mutation testing is indicated. Furthermore, it might theoretically be possible that immunohistochemical absence of PMS2 or MSH6, without concomitant absence of MLH1 or MSH2, respectively, is due to mutations in MLH1 or MSH2. Such mutations would not decrease MLH1 or MSH2 immunostaining, while the binding and expression of PMS2 and MSH6, respectively, are abrogated. Therefore, absent PMS2 or MSH6 immunostaining without detectable mutations in the PMS2 or MSH6 gene calls for mutation analysis of MLH1 or MSH2, respectively. At our institution, MSI analysis and immunohistochemistry are requested either by the pathologist when patients fulfil the criteria as depicted in , or by the clinical geneticist (or clinician) when individuals meet the Bethesda Guidelines. The flowchart of the molecular diagnostics of LS in The Netherlands is depicted in . All these different molecular diagnostic procedures will be described in more detail in the next paragraphs.
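To make the decision logic above concrete, the following is a minimal sketch (in Python) of the MSI categorization thresholds and the heterodimer-based interpretation of the four-protein staining pattern. It is a simplified illustration of the rules described in the text, not a validated diagnostic tool; the function and parameter names are our own.

    def classify_msi(unstable_markers: int, total_markers: int = 5) -> str:
        """Categorize MSI status from the number of unstable markers.

        MSI-H: more than one unstable marker (>=40% of markers);
        MSI-L: exactly one unstable marker; MSS: no instability.
        """
        if unstable_markers > 1 or unstable_markers / total_markers >= 0.4:
            return "MSI-H"
        if unstable_markers == 1:
            return "MSI-L"
        return "MSS"

    def interpret_mmr_ihc(mlh1: bool, msh2: bool, msh6: bool, pms2: bool) -> str:
        """Map MMR immunostaining (True = nuclear staining present) to the suspect gene.

        Follows the heterodimer rules described above: MLH1 loss drags down PMS2,
        MSH2 loss drags down MSH6, while isolated PMS2 or MSH6 loss spares MLH1/MSH2.
        """
        if not mlh1 and not pms2:
            return "MLH1: exclude epigenetic silencing (MLH1 methylation/BRAF), then germline testing"
        if not msh2 and not msh6:
            return "MSH2: suspect LS, germline testing indicated"
        if not pms2:
            return "PMS2: if no PMS2 mutation is found, analyse MLH1"
        if not msh6:
            return "MSH6: if no MSH6 mutation is found, analyse MSH2"
        return "No loss of expression detected"

    print(classify_msi(3))                              # MSI-H
    print(interpret_mmr_ihc(True, False, False, True))  # MSH2 pattern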
MSI analysis
From routine formalin-fixed and paraffin-embedded (FFPE) tumour tissue specimens, 10 to 20 consecutive 4-μm sections are cut and routinely glued on microscope glass slides. The number of sections is determined by the size of the tissue fragments that need to be isolated for DNA analysis. All sections are deparaffinized, and the first and last sections of the series are routinely stained with Mayer haematoxylin and eosin. These sections are used as a reference for the isolated tissue parts. The intermediate sections are stained in haematoxylin and rinsed in distilled water. The indicated tumour and normal tissue fragments are then manually scraped in distilled water from the glass slide and transferred to Eppendorf vials. From the remaining tissue fragments on the glass slides, routine microscopic preparations are made after additional staining with eosin. With these preparations, the isolated tissue fragments can be verified ( ). Occasionally, when large and easily recognizable tissue fragments can be isolated, scraping is performed from paraffin sections on glass slides without deparaffinization. Furthermore, when the tissue fragments to be isolated are too small for manual isolation, laser microdissection is used on haematoxylin and eosin stained sections glued on membrane-containing glass slides (PALM Membrane Slides, P.A.L.M. Microlaser Technologies AG, Bernried, Germany) ( ). Although MSI can be reliably detected even when DNA is isolated from a tissue fragment composed of only 10% neoplastic cells (unpublished data), tumour DNA is preferably isolated from a tissue fragment with a high percentage (>70%) of tumour cells. DNA isolated from tissue with a high percentage of tumour cells can also be used for reliable additional investigations (BRAF mutation and MLH1 hypermethylation analyses). In the case of an adenoma, the fragment with the highest grade of dysplasia should be used for DNA isolation. For the isolation of normal DNA, a tissue fragment composed of normal cells, preferably from the normal epithelial counterpart of the tumour (e.g. normal colorectal or normal endometrial mucosa), is used to circumvent heterogeneity problems that can be caused by mosaicism (e.g. mosaic MLH1 promoter germline hypermethylation, or MSH2 promoter hypermethylation restricted to EpCAM-expressing cells). However, since these mosaic phenomena are very rare, other normal tissue fragments (e.g. a tumour-negative lymph node) can be used for normal DNA isolation when normal mucosa is unavailable or difficult to isolate. From the microdissected FFPE tissue fragments, DNA is extracted by the addition of 100 to 200 μl of lysis buffer (10 mM Tris–HCl pH 8.0, 1 mM ethylenediaminetetraacetic acid [EDTA] pH 8.0, 0.01% Tween 20) containing 2 mg/ml proteinase K and 5% Chelex 100 resin; when very small tissue fragments are used, digestion is performed in a volume down to 25 μl. Following overnight incubation at 56°C, proteinase K is inactivated at 100°C for 10 min. Next, the dissolved DNA is separated from cell debris by centrifugation at maximum speed in a microcentrifuge for 5 min. The DNA-containing supernatant is carefully pipetted from the Chelex resin-containing pellet (Chelex resin inhibits polymerase activity) and transferred to another Eppendorf vial. When un-deparaffinized sections are used for DNA isolation, the DNA-containing supernatant is collected by carefully poking the pipette tip through the solidified paraffin layer on top of the supernatant.
Different methods for MSI analysis are currently available. In our laboratory, we use the MSI Analysis System of Promega (Promega, Madison, WI, USA): a fluorescent multiplex PCR-based assay in which the PCR products are separated by capillary electrophoresis using an ABI PRISM 3130xl genetic analyser (Applied Biosystems, Foster City, CA, USA). PCR is performed according to the kit instructions in a total volume of 10 μl, including 2 μl of an approximately 80-fold dilution of the isolated DNA solution. The output data are analysed with GeneMarker software (SoftGenetics, State College, PA, USA) to determine the MSI status of tumour samples. This system includes fluorescently labelled primers for the co-amplification of five quasi-monomorphic mononucleotide repeat markers: BAT-25, BAT-26, NR-21, NR-24 and MONO-27. In addition, two pentanucleotide markers (Penta C and Penta D), characterized by a high level of polymorphism, have been added to provide information on possible sample mix-up or contamination. Because of the low size variation of the selected mononucleotide markers in the population, this analysis allows, in most cases, MSI to be assessed in tumour DNA alone; DNA from an MSS cell line suffices as the normal DNA reference. If inconclusive results are obtained, for example due to the infrequent occurrence of bi-allelic variation or borderline shifts of the marker peaks, the assay is repeated with both tumour and patient-matched normal DNA. Furthermore, additional mononucleotide MSI markers, such as BAT-40, can be used in the case of an MSS tumour with a strong clinical suspicion of underlying LS. Results for MSS and microsatellite unstable tumours are shown in .

Immunohistochemistry
Our method of immunohistochemistry was described in detail previously. Briefly, FFPE tissue sections (4 μm) are dewaxed, and antigen retrieval is performed in 10 mM Tris–EDTA buffer (pH 9.0) in a microwave oven for 45 min. at 100°C. Primary antibodies anti-MLH1 (Pharmingen BD, Alphen aan den Rijn, The Netherlands; clone G168–728; dilution, 1:20), anti-MSH2 (Pharmingen BD; clone G219–1129; dilution, 1:300), anti-MSH6 (Pharmingen BD; clone 44; dilution, 1:100) and anti-PMS2 (Pharmingen BD; clone A16–4; dilution, 1:50) are applied for 1 hr at room temperature. After washing, immunoreactivity is visualized with the Envision kit (Dako, Glostrup, Denmark). Subsequently, the sections are counterstained with Mayer haematoxylin and evaluated under a light microscope ( ).

MLH1 promoter hypermethylation assay
In the case of absent MLH1 expression in tumour cells, the methylation status of the MLH1 promoter can be determined by different methods, such as methylation-specific PCR and methylation-specific multiplex ligation-dependent probe amplification (MS-MLPA). MS-MLPA is performed with the SALSA MS-MLPA Kit ME011-A1 for MMR genes (MRC-Holland, Amsterdam, The Netherlands). The analysis is performed according to the kit instructions with 3 μl of undiluted DNA solution as input. The assay takes advantage of the methylation-sensitive endonuclease HhaI, which cleaves only unmethylated DNA fragments. The MS-MLPA kit contains 8 control probe sequences and 21 methylation-sensitive probes, 5 of which recognize CpG dinucleotides within the MLH1 promoter. The methylation-sensitive probes contain a restriction site for HhaI. Comparison of an HhaI-digested DNA sample (yielding signal only from methylated DNA) with its undigested counterpart (yielding signal from both methylated and unmethylated DNA) provides insight into the degree of methylation.
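As a numerical illustration of this digested/undigested comparison, the sketch below (in Python; all peak heights are invented) computes a per-probe methylation dosage ratio. The normalization against the control probes follows the scheme detailed in the next paragraph.

    def methylation_ratio(probe_dig, controls_dig, probe_undig, controls_undig):
        """Per-probe MS-MLPA methylation dosage ratio.

        Each probe peak is first normalized to the mean peak height of the
        control probes from the same sample; the ratio of the normalized
        digested to the normalized undigested value estimates the methylated
        fraction (0 = unmethylated, approximately 1 = fully methylated).
        """
        norm_dig = probe_dig / (sum(controls_dig) / len(controls_dig))
        norm_undig = probe_undig / (sum(controls_undig) / len(controls_undig))
        return norm_dig / norm_undig

    # Invented peak heights for one MLH1 promoter probe and eight control probes:
    print(methylation_ratio(880, [1000] * 8, 980, [1050] * 8))  # ~0.94, heavily methylated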
Details of the MS-MLPA protocol are freely available on the website of the manufacturer ( http://www.mrc-holland.com ). Basically, tumour DNA is hybridized to the probe mix. After hybridization, half of the sample is subjected to a ligation step joining the two adjacently hybridized fragments of a probe set, whereas the other half of the sample is subjected to both ligation and HhaI digestion, leaving only methylated sequences intact. Subsequent PCR amplification exponentially amplifies all ligated, but undigested, probes. The signal generated from the part of the sample that has undergone both ligation and digestion therefore represents the amount of methylated DNA present in the tumour. For fragment analysis, the PCR products are separated by capillary gel electrophoresis using an ABI PRISM 3130xl genetic analyser (Applied Biosystems) and quantified with GeneMarker software version 1.7 (SoftGenetics). The MS-MLPA results are normalized by dividing the peak height of each MLH1 probe signal by the mean peak height of the eight control fragments obtained with the same sample ( ). The degree of methylation for individual MLH1 probes can then be assessed by dividing the normalized value of each MLH1 probe in the digested DNA sample by the normalized value of the same probe in the corresponding undigested sample. The MS-MLPA assay is performed with both tumour and normal mucosal DNA to detect possible germline MLH1 promoter hypermethylation.

BRAF mutation analysis
BRAF alterations of the mutational hotspot codon V600 are determined by bi-directional cycle sequencing of PCR-amplified fragments. PCR amplification is performed with the M13-tailed forward primer 5′-TGT AAA ACG ACG GCC AGT AAA CTC TTC ATA ATG CTT GCT CTG-3′ and the M13-tailed reverse primer 5′-CAG GAA ACA GCT ATG ACC GGC CAA AAA TTT AAT CAG TGG AA-3′. PCR products are generated in a 15 μl reaction mixture including 1.0 μl of undiluted DNA solution, 10 μmol of each primer, 25 mM MgCl2, 10 mM dNTPs and 1 U Taq polymerase (Promega). The PCR reaction is performed in a thermocycler (Biometra, Göttingen, Germany) with an initial denaturation step (95°C) for 3 min., followed by 35 cycles consisting of denaturation (95°C) for 30 sec., annealing (60°C) for 45 sec. and extension (72°C) for 45 sec. After the final cycle, an extension period of 10 min. at 72°C is performed. The PCR products are sequenced with the M13 forward primer 5′-TGT AAA ACG ACG GCC AGT-3′ and the M13 reverse primer 5′-CAG GAA ACA GCT ATG ACC-3′ using the ABI PRISM BigDye Terminator v3.1 kit (Applied Biosystems). Sequence analyses are performed on an ABI PRISM 3130xl genetic analyser (Applied Biosystems). Samples are analysed using Mutation Surveyor software (SoftGenetics) and compared with the public GenBank reference sequence (NT_007914). Examples of BRAF mutation analysis results are shown in .
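As a toy illustration of the interpretation step (not the sequencing chemistry itself), the sketch below compares a sequenced BRAF codon 600 with the wild-type GTG (valine). The codon table is deliberately minimal and the example codon is hypothetical, although GTG→GAG (V600E, c.1799T>A) is the substitution referred to in the text.

    CODON_TABLE = {"GTG": "V", "GAG": "E", "AAG": "K", "GAT": "D"}  # minimal subset

    def call_codon600(codon: str) -> str:
        """Call the BRAF codon 600 genotype against the wild-type GTG (Val)."""
        codon = codon.upper()
        if codon == "GTG":
            return "wild type (V600)"
        aa = CODON_TABLE.get(codon)
        return f"mutant: V600{aa}" if aa else "mutant: V600? (codon not in table)"

    print(call_codon600("GAG"))  # mutant: V600E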
Over the last decade, the diagnostics of LS have improved considerably. Nevertheless, some limitations remain to be addressed. It has to be taken into account that the described procedures provide information on the chance that a certain tumour arose in the context of LS; they are not diagnostic for LS in an absolute sense. The false-negative rate of MSI analysis is very low (<5%), but false-negative results cannot be completely ruled out. MSI can be very subtle or escape detection, particularly in low-grade lesions such as adenomas, in endometrial carcinomas ( , panel N3/T3) and in samples with a low percentage of neoplastic cells. Such false-negative results may lead to the exclusion of LS patients (and affected family members) from necessary surveillance programs and subsequent failure to detect (secondary) cancers at an early stage. In addition, although rare, sporadic MSS tumours can occur in LS patients, in which case MSI analysis fails to indicate LS. To exclude false-negative MSI results as much as possible, it is necessary to isolate DNA from a tissue fragment with a high percentage of tumour cells. For this, laser microdissection might be preferable to manual microdissection; however, laser microdissection is a time-consuming and labour-intensive way to obtain sufficient tissue fragments for DNA isolation. In general, it is recommended to refer patients with a high clinical or familial suspicion of LS to a clinical genetics department, irrespective of the MSI status. In addition, other hereditary CRC syndromes, such as attenuated familial adenomatous polyposis, MYH-associated polyposis, Cowden syndrome or Peutz-Jeghers syndrome, might need to be excluded. Although the assessment of MMR protein expression by immunohistochemistry is a fast and simple procedure, the interpretation of the results can be difficult. Interpretation may be impeded by absence or low intensity of the nuclear staining in tumour and normal tissue due to fixation artefacts, especially in old archival specimens. In the case of missense mutations, the inactive protein may be (partly) expressed and detectable by immunohistochemistry. Interpretation is also hampered by some degree of observer variation, and the value of immunohistochemistry partly depends on the experience of the pathologist. For these reasons, immunohistochemistry cannot replace MSI testing for the detection of LS, which underlines the importance of the combined application of MSI analysis and MMR protein immunostaining. In the evaluation of MLH1 promoter methylation, it is important to study the correct promoter regions, since MLH1 expression correlates only with methylation of the proximal promoter regions (mainly region C, but also region D) [ , , ]. Nevertheless, studies are still published in which the distal MLH1 promoter regions, which are not or only poorly associated with gene silencing, were analysed. Moreover, epigenetic inactivation of the second, normal MLH1 allele by promoter methylation (a second hit) may also play a role in individuals with LS, and it should be realized that the detection of MLH1 promoter methylation cannot completely rule out LS. In the case of a strong clinical suspicion, referral to a clinical geneticist is indicated. The exact frequencies of MLH1 promoter methylation in LS patients (either as a second inactivating event or as a heritable germline epimutation) are unknown.
It has been reported that in tumours from MLH1 mutation carriers, the wild-type allele is hypermethylated in 0–46% of the tumours [ , , , , – ]. However, only one study evaluated the proximal promoter region (region D) associated with gene silencing, in 55 CRCs and endometrial cancers of MLH1 germline mutation carriers; hypermethylation was seen in 7.3% of all tumours (16% of CRCs). The other studies investigated promoter regions not associated with MLH1 silencing (i.e. the distal promoter regions) [ , , , , – ]. There are some other points of concern in the molecular diagnostics of LS. First, the value of MSI testing and immunohistochemistry in LS-related tumours other than CRC is largely unknown. In endometrial tumours, the second most common malignancy in LS, MSI can escape detection because only subtle shifts occur in the size of the markers. Therefore, MSI analysis in endometrial cancers is performed with patient-matched normal DNA as the reference, and molecular pre-screening has been found feasible [ , , ] ( , panel N3/T3). Furthermore, the quality of DNA extracted from FFPE tumours can occasionally be poor and therefore unsuitable for MSI analysis. Last but not least, some individuals might have ethical objections to MSI testing or immunohistochemistry, since the described molecular examinations can make the diagnosis of LS very likely, which might have negative social consequences and raise concerns, for example about insurance risks. Therefore, we believe that the clinician should inform the patient that the pathological examinations may not only give information about the nature of the tumour but may also indicate an elevated risk of an underlying hereditary disorder.
Different diagnostic strategies have been developed for LS, as discussed in this review, and the optimal method for the identification of LS patients is still debated and in flux. In the previous paragraphs, the molecular diagnostic approach to LS in The Netherlands (Erasmus MC, University Medical Center, Rotterdam) has been described ( ). This approach combines MSI analysis and MMR protein immunostaining and is, in our opinion, a productive way of pre-selecting patients for germline mutation analysis, with a central role for the pathologist. Nevertheless, if the clinical suspicion of LS is very high, for example because of a positive family history of LS-associated cancers or an LS-associated malignancy diagnosed at a very young age, referral to a clinical geneticist is strongly recommended, even in the case of tumours without MMR deficiency (i.e. MSS).
Smoothened loss is a characteristic of neuroendocrine prostate cancer | 010594f7-514c-419a-907e-d4bd866da811 | 8251989 | Histology[mh] | INTRODUCTION Prostate cancer is one of the most common and deadly cancers in males worldwide, and more than 34 thousand men are estimated to die from prostate cancer in 2021. Antiandrogen therapies are standardized approaches to the treatment of metastatic prostate cancers. However, after months or years of remission, nearly all patients relapse, and these cancers are termed castration‐resistant prostate cancer (CRPC), the majority of which still depend on androgen receptor signaling (AR) for survival. Currently, with the widespread clinical usage of potent AR pathway inhibitors (APIs), the incidence of treatment‐emergent neuroendocrine prostate cancer (NEPC) has paradoxically increased, with up to 20% of advanced CRPCs ultimately developing small‐cell neuroendocrine pathological features. NEPC tumors are AR signaling independent and exhibit certain neuroendocrine signatures, making them resistant to APIs, and the platinum‐based chemotherapeutic regimen is effective for only a short period ; thus, NEPC is the most lethal subtype of CRPC. Tremendous progress has been made on understanding NEPC in the last decade. Genomic loss of TP53 and RB1 is more prevalent in NEPC (50%–75%) than in adenocarcinoma (5%–15%) and facilitates the activation of pluripotent networks mediated by derepression of SOX2 and EZH2. , Genomic aberrations cooperate with epigenetic modifiers, such as REST, and neurolineage pioneering transcription factors, such as BRN2, ONECUT2, SRRM4, and MYCN drive the emergence of NEPC. Loss of luminal lineage transcription factors, such as AR and FOXA1 breaches the barriers of prostate adenocarcinoma (AdPC) reprogramming. However, the evolution of NEPC is a complex process and a spectrum of intermediate differentiation states constitute a continuum between the AdPC and NEPC phenotypes; therefore, the underlying molecular mechanisms remain largely elusive, and druggable targets for clinical application need to be identified. Smoothened (SMO) is a class Frizzled (Class F) G protein‐coupled receptor that is a key signal transducer of the Hedgehog (Hh) pathway. In the presence of secreted Hh ligands (e.g., SHH, DHH, and IHH), SMO is released by an inhibitory receptor, PTCH1, which leads to the activation of glioma‐associated oncogene (Gli) transcription factors (TFs), namely, Gli1, Gli2, and Gli3. The Hh signaling pathway plays fundamental morphogenic and mitogenic roles in the process of embryonic development and a vital role in the regulation of cell fate and ductal morphogenesis in the prostate; however, at the completion of embryogenesis, Hh signaling becomes quiescent. , Hyperactive Hh caused by recurrent gain‐of‐function mutations in SMO is recognized as an oncogenic driver of cancers, including basal cell carcinoma (BCC) and medulloblastoma (MB). Most Hh signaling inhibitors (HHIs) suppress Hh signaling by directly targeting SMO, vismodegib and sonidegib and have been approved by the US Food and Drug Administration for use in advanced BCC. , Investigations of other HHIs in other cancers, including pancreatic, hematological, and prostate cancer, have also yielded attractive therapeutic results. In prostate cancer, Hh pathway activity is required for the propagation and metastasis of tumors. 
Hh and AR signaling are intimately intertwined: Hh ligand expression is directly suppressed by AR signaling but is released under androgen-depleted conditions, suggesting that Hh signaling plays a role in prostate cancer cell survival after androgen ablation; in turn, activated Hh signaling supports the progression of prostate cancer by sustaining the activity of AR signaling, which demonstrates the potential clinical feasibility of HHIs in treating CRPC. , However, the significance of Hh signaling in NEPC has not yet been investigated. In this study, we determined that the expression of SMO, the key transducer of Hh signaling, was reduced or lost in NEPC and that SMO loss was associated with attenuated AR signaling, suggesting a novel role for SMO in the pathogenesis of NEPC.
MATERIALS AND METHODS

2.1 Data acquisition and processing
Transcriptome and clinical data were obtained from the publicly available cBioPortal web server, the Gene Expression Omnibus database, and the Living Tumor Laboratory (LTL) database ( www.livingtumorlaboratory.com ). A total of six datasets (from five independent studies) containing clinical AdPC and NEPC samples and two independent datasets containing patient-derived xenograft (PDX) AdPC and NEPC samples were analyzed. The expression of selected genes was compared between AdPC and NEPC. Data on copy number variations (CNVs) were obtained from cBioPortal. An enrichment analysis was performed with GSEA v4.0.3 software, and hallmark gene sets from the Molecular Signatures Database ( http://software.broadinstitute.org/gsea/msigdb ) were used for pathway analysis.

2.2 Clinical sample collection
Tissue collection protocols were approved by the Ethics Committee of Shandong Provincial Hospital affiliated to Shandong University. Due to the retrospective nature of the study, patient consent was not required. All data were analyzed anonymously. A total of seven samples, including five from patients with morphologically identified NEPC and two from patients with AdPC mixed with NEPC who visited our hospital between January 2015 and April 2020, were subjected to immunohistochemistry (IHC) analysis. Among these seven patients, two with NEPC were previously treated with hormonal therapy, three with NEPC were diagnosed with primary treatment-naïve prostate cancer, and two with AdPC mixed with NEPC were previously treated with hormonal therapy. Samples from 22 patients with high-grade (Gleason score ≥7) AdPC and 5 patients with AdPC with neuroendocrine differentiation were also collected as part of this study. All tissues were obtained from surgical resection or needle biopsy.

2.3 Immunohistochemistry
IHC was performed using routine protocols at the Department of Pathology of our hospital. Paraffin-embedded blocks were cut into 4-μm serial sections, which were deparaffinized and rehydrated routinely. Antigen retrieval was conducted in Tris-EDTA (pH 9.0) with high-pressure steam for 3 min. Endogenous peroxidase activity was blocked with 3.0% hydrogen peroxide for 20 min. Then, the sections were incubated with antibodies against SMO, AR, CHGA, and synaptophysin (SYP). The list of primary antibodies and dilution ratios is given in Table . Hematoxylin and eosin staining was also conducted to obtain a clear view of the tissues. Images of these sections were acquired under a microscope and scored independently by two pathologists with more than 10 years of experience, as reported previously. Briefly, the proportion score represented the proportion of positively staining tumor cells and was assigned as follows: 0, none; 1, <1/100; 2, 1/100 to 1/10; 3, 1/10 to 1/3; 4, 1/3 to 2/3; and 5, >2/3. Next, the average intensity of the positive tumor cells was assigned an intensity score as follows: none (−), weak (1+), moderate (2+), and strong (3+). The proportion and intensity scores were then added to obtain a total score.

2.4 Cell culture and reagents
LNCaP cells were purchased from the Cell Bank of the Shanghai Institute of Cell Biology, Chinese Academy of Sciences, and C4-2B cells were kindly provided by Xing Yutong from Nankai University. These cells were cultured in Roswell Park Memorial Institute 1640 (RPMI-1640) medium (Gibco) supplemented with 10% fetal bovine serum (Gibco).
Phenol red-free RPMI-1640 with charcoal-stripped fetal bovine serum was used to mimic androgen deprivation conditions in vitro. SAG, JQ1, enzalutamide (ENZ), and dihydrotestosterone (DHT) were purchased from Selleckchem and dissolved in dimethyl sulfoxide at stock concentrations of 200, 0.5, 10, and 10 µM, respectively. GANT61 (Selleckchem) was diluted in ethanol at a stock concentration of 10 mM.

2.5 Gene silencing
LNCaP cells were transfected with a short hairpin RNA (shRNA) to knock down the expression of SMO. sh-SMO and sh-NC (GV115 vector) recombinant lentiviruses were purchased from GeneChem. The target sequence of sh-SMO was 5′-ATCGCTACCCTGCTGTTAT-3′, with a scrambled shRNA serving as the control (sh-NC, 5′-TTCTCCGAACGTGTCACGT-3′). Transfection efficiency was monitored by determining the percentage of green fluorescent protein-positive cells and by real-time polymerase chain reaction (PCR) and western blot analyses.

2.6 Gene overexpression
LNCaP cells were transfected with Gli1 complementary DNA (cDNA) to overexpress Gli1. The Gli1 cDNA (GV492 vector) was purchased from GeneChem, and the transfection experiment was performed according to the manufacturer's instructions. Transfection efficiency was monitored by real-time PCR and western blot analyses.

2.7 Western blot analysis
Cell lysates were subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred to polyvinylidene difluoride membranes, which were then incubated with primary antibodies against AR, SMO, prostate-specific antigen (PSA), Gli1, SYP, RE1-silencing transcription factor (REST), and β-actin. All membranes were incubated with a horseradish peroxidase (HRP)-conjugated secondary antibody. Proteins were detected using an Immobilon Western Chemiluminescent HRP Substrate Kit (Merck Millipore) and visualized by autoradiography. The list of primary antibodies and dilution ratios is given in Table . All western blot experiments were performed at least three times.

2.8 RNA isolation and real-time polymerase chain reaction
Total RNA was extracted from LNCaP and C4-2B cells using TRIzol reagent (Takara) according to the manufacturer's guidelines. One microgram of total RNA was used to generate first-strand cDNA, and the relative expression of target genes was determined after normalization against β-actin. A list of the real-time PCR primers used in this study is shown in Table . All experiments were performed in triplicate and repeated three times.

2.9 ELISA
LNCaP and C4-2B cells were cultured in 6-well plates until they were 60% confluent. The monolayers were then washed three times with phosphate-buffered saline and maintained in fresh medium for 2 days. The culture supernatants were collected, and PSA was measured using an ELISA kit (JingMei Biological Engineering Co., Ltd) according to the manufacturer's instructions. The number of cells in each well was counted, and the PSA level in each well was normalized to the cell number. The experiment was performed three times.

2.10 Statistical methods
Comparisons between two independent groups were performed with Student's t test unless stated otherwise. The data were statistically analyzed using SPSS software (SPSS standard, version 18.0; SPSS, Inc.). Values of p < .05 were considered statistically significant unless stated otherwise.
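As a small worked example of the IHC scoring described in Section 2.3, the sketch below (in Python) combines the proportion and intensity scores into the total score; the exact handling of values falling on the cutoff boundaries is our own assumption.

    def proportion_score(frac: float) -> int:
        """0: none; 1: <1/100; 2: 1/100-1/10; 3: 1/10-1/3; 4: 1/3-2/3; 5: >2/3."""
        if frac == 0:
            return 0
        if frac < 0.01:
            return 1
        if frac < 0.10:
            return 2
        if frac < 1 / 3:
            return 3
        if frac <= 2 / 3:
            return 4
        return 5

    def total_score(frac_positive: float, intensity: int) -> int:
        """Total IHC score = proportion score + intensity score (0 = none ... 3 = strong)."""
        return proportion_score(frac_positive) + intensity

    print(total_score(0.5, 2))  # proportion 4 + moderate intensity 2 = 6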
RESULTS

3.1 SMO mRNA expression is downregulated in NEPC
Recent studies have shown that NEPC has a unique transcriptional profile compared with that of AdPC. To identify differences in Hh pathway genes between AdPC and NEPC, we examined the expression of six key components of the Hh axis (SHH, SMO, PTCH1, Gli1, Gli2, and Gli3) (Figure ) in three public RNA-seq datasets of prostate cancer , , and identified SMO as the gene with the most significant differential (lower) expression in NEPC compared with AdPC (Figure ). Three other prostate cancer datasets from two studies , were also analyzed, and the results confirmed the loss of SMO in NEPC (Figure ). Similar results were obtained in the prostate cancer PDXs from the LTL dataset (Figure ) and in prostate cancer cell lines (Figure ). LTL331/331R is a unique NEPC PDX model used to study the transdifferentiation process of NEPC. The changes in tumor volume, which indicate disease progression, are shown in Figure . We next analyzed the dynamic changes in SMO expression in this model. Figure shows that the mRNA levels of AR-related genes (AR, KLK3, TMPRSS2, NKX3-1, and SPDEF) decreased while the levels of neuroendocrine markers (CHGA, ENO2, SYP, and SCG3) increased during the transdifferentiation of NEPC. Previously reported NE-associated TFs (ASCL1, SOX2, PEG10, BRN2, and SRRM4) , , , , were also analyzed in this model and exhibited increased expression to varying degrees at different time points (Figure ). Interestingly, the expression of SMO remained stable for 12 weeks postcastration and was markedly downregulated at the terminal time points of NEPC relapse. The profile of SMO expression dynamics was highly similar to those of REST and YAP1, which were recently reported to be selectively lost in NEPC , , suggesting a role of SMO loss in the emergence of NEPC. Analysis of CNVs in these datasets revealed no evidence of copy number loss for SMO in NEPC (Figure ), indicating a possible epigenetic mechanism underlying the downregulation of SMO. BRD4, a member of the bromodomain and extraterminal (BET) family, is an epigenetic reader that recognizes acetylated lysine residues of histone proteins and acts as a transcriptional regulator. BRD4 reportedly plays an important role in prostate cancer progression, and Hh signaling has been shown to be an epigenetic target of BET in a mouse cell line. To uncover the upstream signaling governing SMO expression in prostate cancer, we treated LNCaP and C4-2B cells with a BET inhibitor (JQ1). As expected, JQ1 treatment downregulated SMO mRNA and protein levels in both cell lines (Figure ), suggesting that SMO expression is positively regulated by BET proteins.

3.2 Loss of SMO protein expression is prevalent in NEPC but rare in high-grade AdPC
Given the specific downregulation of SMO mRNA in NEPC, we next assessed SMO protein levels in human prostate cancer tissues by IHC. The percentages and intensities of SMO, AR, SYP, and CHGA staining are summarized in Table and Figure . Among the morphologically identified cases of NEPC, 100% (5 of 5) showed complete loss of SMO and heterogeneous positive staining for the classic NE markers SYP and CHGA (Table ; Figure ). In contrast, only 9% (2 of 22) of the high-grade AdPC samples showed loss of SMO, and most AdPC tissues showed weak to strong SMO staining (Table ; Figure ).
Interestingly, in two tissue samples of AdPC mixed with NEPC, the NEPC components showed SMO loss, while the adenocarcinoma components showed strong positivity (Figure ); AR/SMO and SYP/CHGA staining were mutually exclusive. Moreover, tissue samples of AdPC with neuroendocrine differentiation were positively stained for SMO, AR, SYP, and CHGA in luminal cancer cells (Figure ). Collectively, these results demonstrate that SMO expression is lost specifically in NEPC tissues. 3.3 Gene set enrichment analysis of global transcriptomic variations associated with SMO expression The downregulated SMO expression in NEPC suggests that SMO may play roles in the transdifferentiation of NEPC. To confirm this, we performed GSEA to evaluate the global transcriptomic variations associated with SMO gene expression based on the data from the six published datasets listed in Figure . An SMO mRNA expression level below the lower quartile was defined as "SMO-lo"; otherwise, it was defined as "SMO-hi." Gene sets with a normalized enrichment score >1.50 or <−1.50 and a q value ≤.002 were considered significant, as shown in Figures and . In the six datasets, the most frequent molecular signatures that were positively associated with SMO expression were "ANDROGEN_RESPONSE" (6/6) and "CHOLESTEROL_HOMEOSTASIS" (5/6), while the most frequent signatures that were negatively associated with SMO expression were "E2F_TARGETS" (6/6), "PANCREAS_BETA_CELLS" (5/6), and "G2M_CHECKPOINT" (4/6) (Figure ). These results indicate that SMO downregulation is associated with several key signatures of NEPC, suggesting that SMO is a barrier in the transdifferentiation of AdPC and that SMO loss may drive the emergence of the NEPC phenotype through these pathways. 3.4 SMO knockdown inhibits AR signaling activity and reduces AR expression The above results show that "ANDROGEN_RESPONSE" is the signature most strongly positively enriched with SMO expression (Figure ), suggesting that an association exists between SMO and AR signaling. A previous study by Chen reported that AR suppressed the expression of Hh ligands in prostate cancer cells, which was confirmed by our data herein, as exogenous androgen deprivation relieved the repression of SHH expression (Figure ). In contrast, SMO expression was not significantly different between the androgen (DHT) and androgen antagonist (ENZ) treatment groups in our study (Figure ), and similar results were obtained in the prostate cancer PDXs from the LTL dataset (Figure ). Together, these results suggest that SMO, unlike its ligand SHH, is not a regulatory target of AR. Next, we assessed the effect of SMO loss on AR signaling by constructing stable SMO-knockdown LNCaP and C4-2B cell lines with a lentivirally delivered shRNA (termed sh-SMO). As shown in Figure , AR and PSA expression was decreased in sh-SMO cells compared with control cells (sh-NC), while the expression levels of NE markers were not significantly different, which was consistent with our GSEA results (Figure ). Because SMO reportedly exerts its effects mainly by activating Gli TFs, we next assessed the effects of the Gli antagonist GANT61 and the Hh signaling agonist SAG on AR expression in sh-SMO cells. Figure shows that GANT61 further reduced AR and PSA protein expression, while SAG treatment partially restored AR and PSA expression, suggesting that Gli TFs are important for the regulation of AR signaling by SMO.
Gli1 is a well-recognized downstream target as well as an effector TF of the Hh pathway, and, consistent with this, we found that Gli1 expression was downregulated in sh-SMO cells (Figure ). Notably, as shown in Figure , overexpression of Gli1 in SMO-knockdown cells rescued the expression of AR and PSA, indicating that Gli1 is the downstream effector of SMO involved in the regulation of AR signaling.
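To make the grouping and filtering criteria from section 3.3 concrete, the sketch below encodes the lower-quartile SMO-lo/SMO-hi split and the NES/q-value significance filter. The tables are hypothetical stand-ins for the published datasets; all column names and values are assumptions for illustration only.

```python
# Hedged sketch of the SMO-lo / SMO-hi split and the GSEA significance filter
# described in section 3.3; `expr` and `gsea` hold made-up illustrative values.
import numpy as np
import pandas as pd

expr = pd.DataFrame({"sample": ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"],
                     "SMO":    [1.2, 3.4, 0.8, 5.6, 2.9, 4.1, 0.5, 3.0]})
q1 = expr["SMO"].quantile(0.25)  # lower quartile of SMO expression
expr["group"] = np.where(expr["SMO"] < q1, "SMO-lo", "SMO-hi")

gsea = pd.DataFrame({"gene_set": ["ANDROGEN_RESPONSE", "E2F_TARGETS", "MYOGENESIS"],
                     "NES":      [1.8, -2.1, 0.9],
                     "q":        [0.001, 0.0005, 0.2]})
# Keep gene sets with |NES| > 1.50 and q <= .002, mirroring the stated thresholds.
significant = gsea[(gsea["NES"].abs() > 1.50) & (gsea["q"] <= 0.002)]
print(significant)  # ANDROGEN_RESPONSE and E2F_TARGETS pass; MYOGENESIS does not
```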
DISCUSSION The development of highly aggressive NEPC in prostate cancer patients undergoing treatment with stringent AR blockade is a critical clinical issue. It is imperative to investigate the molecular mechanisms of NEPC pathogenesis and develop effective therapies for this lethal disease. In this study, we found that the expression of SMO, a key factor in Hh signaling, is dramatically downregulated or lost in NEPC by analyzing multiple public datasets of advanced prostate cancer. We confirmed this finding in clinical prostate cancer specimens obtained from our hospital by IHC analysis. In vitro experiments showed that knocking down SMO in LNCaP and C4-2B cells downregulated the activity of AR signaling and AR gene expression, implying a supporting role of SMO loss in the transition from the AR-positive luminal phenotype to the AR-negative neural phenotype in prostate cancer. SMO is a key transmembrane protein receptor of Hh signaling, and it was identified as an oncogene and a therapeutic target in BCC and MB. In prostate cancer, Hh signaling was reported to play a driving role in the development of androgen blockade resistance, and HHIs targeting SMO were investigated as treatments for CRPC. However, our results demonstrated that SMO was downregulated in NEPC compared with AdPC, suggesting a novel role of SMO in inhibiting the development of NEPC. As the widespread clinical application of potent APIs in CRPC led to the rising incidence of NEPC, the risk of HHIs causing NEPC should be reconsidered. Currently, accumulating evidence indicates that the transition from AdPC to NEPC is a continuum encompassing a spectrum of intermediate cellular differentiation states. Distinct molecular drivers of NEPC may exert their roles at different time points; for example, in the LTL331/331R PDX model, PEG10 was found to be activated in the very early stage of NEPC, while we found that SMO expression was decreased in the late stage (Figure ), similar to the dynamic changes in REST and YAP1, two other repressors of NEPC development. Our IHC data revealed that SMO protein expression was detectable in most AdPC tissues, two AdPC tissues with NE differentiation, and two AdPC loci with a mixture of AdPC and NEPC but was lost in NEPC tissues. These distinct temporal and spatial expression patterns of SMO suggest that it may be a useful biomarker of terminally differentiated NEPC cells, and additional studies with a large panel of clinical samples are warranted. AR signaling plays a principal role in prostate cancer development and progression to CRPC and is notably attenuated in NEPC. Our GSEA results indicated that "ANDROGEN_RESPONSE" had an extremely significant positive correlation with SMO expression. The relationship between AR and Hh signaling is complicated, as Hh signaling is suppressed by AR signaling under normal conditions, while Hh signaling is de-repressed under androgen deprivation conditions, which plays a compensatory role in AR signaling to promote the survival of prostate cancer cells. Our in vitro experiments using the sh-SMO LNCaP and C4-2B cell lines showed that SMO was closely related to AR signaling activity and AR gene expression, which is consistent with previous reports that Hh signaling supports the activity of AR signaling in androgen-deprived and androgen-independent prostate cancers. We further found that the regulation of AR signaling by SMO was mediated by Gli1.
These results provide new insights into the regulation of AR signaling and suggest that SMO acts as a barrier to the lineage transition of AdPC and that SMO loss may drive the transdifferentiation of NEPC characterized by the suppression of AR signaling. Our GSEA results also indicated that SMO loss is associated with several other crucial molecular signatures of NEPC. "CHOLESTEROL_HOMEOSTASIS" was the second most frequent molecular signature positively correlated with SMO expression, and a close link exists among cholesterol, Hh signaling, and AR signaling. Recently, a mass spectrometry-based proteomic analysis revealed reduced expression levels of proteins involved in lipid biosynthesis in NEPC PDXs, suggesting a repressive role of lipid biosynthesis in NEPC pathogenesis. Given that NEPC is AR independent and resistant to APIs, how SMO loss affects steroidogenesis and androgen synthesis in prostate cancer cells is worthy of investigation. Our GSEA results also indicated that SMO loss is associated with the following enhanced molecular signatures: "E2F_TARGETS" and "G2M_CHECKPOINT." Previous studies have reported aberrant features of cell cycle regulation, particularly increases in E2F activity and G2M checkpoint dysregulation, that distinguish NEPC from CRPC. Thus, we hypothesized that SMO loss is involved in the transdifferentiation of NEPC by remodeling prostate cancer cell proliferation. However, we did not observe a significant difference in cell proliferation rates between sh-NC and sh-SMO LNCaP cells (data not shown). Kaplan–Meier survival analysis of the Abida dataset suggested that SMO loss was correlated with poor overall survival, although the difference was not statistically significant (Figure ). We hypothesized that cell proliferation is not a direct downstream target of SMO and that other factors are involved; however, further research is required. In summary, our results indicate that SMO loss is characteristic of NEPC. SMO acts as a barrier to AdPC transdifferentiation, and SMO loss may drive NEPC by dysregulating AR signaling in a Gli1-dependent manner. Thus, these results serve as a reminder to use caution when using HHIs to treat CRPC.
The authors declare that there are no conflicts of interest.
Lili Wang, Chunxiao Wu, and Zhiming Lu conceived the experiments. Lili Wang, Haiying Li, Zhang Li, Ming Li, and Qi Tang carried out the experiments and analyzed data. Lili Wang wrote the manuscript under the guidance of Chunxiao Wu and Zhiming Lu.
Benefits of Exome Sequencing in Children with Suspected Isolated Hearing Loss

Almost one in 500 infants is affected by hearing loss (HL). The prevalence increases dramatically with age in adults, and it has been estimated that approximately one-half of all adults aged between 60 and 69 years and 80% of those over 80 years old suffer from HL. More than one-half of congenital or early-onset, bilateral sensorineural (SN) cases are believed to have a genetic cause, with the remainder either acquired or idiopathic. Genetic etiologies are further divided into isolated or syndromic HL (associated with dysmorphic features and/or additional medical problems). However, it is often difficult to distinguish between syndromic and non-syndromic forms at an early age, as other signs and symptoms may appear only later in life. Hearing screenings are recommended for newborns, as early detection and diagnosis of HL has been proven to improve health outcomes. Universal newborn hearing screening was recommended in 1998 in the European Consensus Statement on Neonatal Hearing Screening in Newborns and introduced in Switzerland in 1999 under the auspices of the "Swiss Working Group: Hearing Screening in Newborns". Screening in Switzerland is performed by measuring otoacoustic emissions (OAE) during the newborn's stay at the maternity unit. If the OAE test fails in one or both ears, further evaluation is recommended, usually starting by repeating OAE measures the same or the following day. If OAEs are still undetectable uni- or bilaterally after repeated measures, the infant is referred for additional investigations within the first months of life. Exclusion of congenital cytomegalovirus infection by polymerase chain reaction analysis on a "Guthrie" ('blood spot') card is part of the routine diagnostic evaluation. Ear-nose-throat (ENT) specialists determine the type and degree of HL through audiometry. In the event of rapidly progressive, sudden or unilateral deafness, ear imaging is performed using a computed tomography (CT) scan of the temporal bone in thin sections and magnetic resonance imaging (MRI) of the inner ear and auditory pathways. Before the emergence and accessibility of broad genetic testing to identify syndromic HL forms, etiological assessment of HL often included additional examinations, such as thyroid assessment, electrocardiogram, urinary tract ultrasound and ophthalmologic examination. These examinations are no longer requested at the time of initial HL diagnosis, but they are performed if necessary, depending on the genetic assessment. In the absence of an environmental cause, international recommendations emphasize the importance of an early genetic analysis as a first-line assessment of a hearing disorder, as a molecular diagnosis can help guide management and counseling. However, one of the challenges of molecular analysis resides in the high degree of genetic heterogeneity. Indeed, over 70 genes have been associated with isolated SN HL, with limited phenotypic clues to distinguish them. Thus, it is impossible to target a particular gene based on the phenotype alone. This may also hold true for syndromic cases in infancy. Indeed, at an early age, HL may be the first sign of an underlying syndromic form, and therefore infants affected by syndromic HL may be referred to a genetic consultation with apparently isolated HL. Children affected by HL can greatly benefit from next-generation sequencing in order to identify a genetic cause.
To date, 30 genes are known to be associated with late-onset or progressive HL. As there is a high prevalence of HL in adults, it is important to identify those with a genetic etiology, as this might influence their management and permit early counseling. The purpose of this study was to describe the outcome of molecular analyses performed in 70 cases with an HL diagnosis, including a brief clinical description of confirmed cases. 2. Materials and Methods 2.1. Patients A total of 61 children and 9 adults with an HL diagnosis were referred to the molecular laboratory of the Division of Medical Genetics at Geneva University Hospitals (Geneva, Switzerland) between January 2017 and December 2020. Following written informed consent obtained from all adult patients and the parents or guardians of children, as well as health insurance approval, patients underwent whole-exome sequencing (WES) with a bioinformatic analysis of an HL and ear malformation gene panel, updated over time and ranging from 172 to 189 genes, including GJB2/GJB6. Genes were selected according to the PanelApp classification of genes involved in hearing loss (Genomics England PanelApp. Available online: https://panelapp.agha.umccr.org/panels/209/ (accessed on 1 August 2021)). Of the 222 indicated genes, all green and literature-relevant orange genes were chosen to build our in-house gene panel. 2.2. DNA DNA from affected individuals and family members was extracted from whole blood. Exomes were captured using one of the following kits according to the manufacturers' recommendations: SureSelect QXT All Human Exon v5, v7 (Agilent Technologies, Santa Clara, CA, USA) or Twist Core exome + RefSeq kits (Twist Biosciences, South San Francisco, CA, USA). Paired-end sequencing was carried out on a NextSeq500 instrument (Illumina, San Diego, CA, USA). Targeted bioinformatic analysis of a panel of genes involved in HL was performed through locally developed pipelines. Read mapping and variant calling were performed using BWA V0.7.13, Picard V2.9.0 and GATK Haplotype-Caller V3.7, and variants were annotated with annovar V2017-07-12 and UCSC RefSeq (V2018-08-10). The variants were searched for in various databases, including dbSNP151, gnomAD 2.1, ClinVar 2018 and HGMD 2016. Pathogenicity prediction scores were assessed using dbscSNV and SpliceAI. Variant filtering and classification were performed based on the guidelines for the interpretation of sequence variants from the American College of Medical Genetics and Genomics (ACMG) and the Association for Molecular Pathology. Sanger sequencing of custom-designed amplicons was used to confirm potentially disease-causing variants in probands and to perform segregation analysis. Pathogenicity scores were obtained for missense and splicing variants using SIFT, PolyPhen, MutationTaster, CADD, pAI, dbscSNV and SpliceAI. Intermediate results were discussed in a multidisciplinary team including geneticists, biologists and ENT specialists. Copy number variation (CNV) detection on exome data was performed using an in-house read-depth-based algorithm combining CoNIFER and XHMM. CoNIFER calculates normalized read-depth z-scores for each exon, generating a matrix of dimension n × M, with n being the number of captured exons and M the total number of samples. This matrix is then fed to XHMM, which uses hidden Markov models to find stretches of duplicated or deleted exons.
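Before the visual review and confirmatory steps described next, the read-depth screening just outlined can be sketched as follows. This is an illustrative toy example of a per-exon z-score matrix of dimension n × M, not the actual CoNIFER/XHMM code; matrix sizes and the review threshold are assumptions.

```python
# Toy sketch of the per-exon read-depth z-score matrix described above
# (n exons x M samples); not the actual CoNIFER/XHMM implementation.
import numpy as np

rng = np.random.default_rng(0)
depth = rng.poisson(lam=100, size=(500, 40)).astype(float)  # n = 500 exons, M = 40 samples

# z-score each exon across samples: a stretch of strongly negative scores in one
# sample suggests a deletion, a stretch of positive scores a duplication.
z = (depth - depth.mean(axis=1, keepdims=True)) / depth.std(axis=1, keepdims=True)

# Exons worth visual review (e.g., in a genome browser) for the first sample,
# using an arbitrary |z| > 2.5 cutoff.
candidates = np.where(np.abs(z[:, 0]) > 2.5)[0]
print(candidates)
```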
Duplications and deletions overlapping genes of the hearing-loss panel are visually reviewed using a private instance of jBrowse, together with the per-exon read-depth z-scores. CNV calls detected by our algorithm were independently confirmed by multiplex ligation-dependent probe amplification (MLPA) analysis. In Switzerland, patient access to WES is dependent on approval by the health insurance company. In some cases, GJB2-GJB6 analysis is asked for beforehand, even if present in our exome gene panel, as it is one of the most frequently involved genes in non-syndromic HL. Four additional children and one adult were found to carry GJB2 mutations by this strategy and were therefore not included in our exome cohort.
3. Results 3.1. Cohort Descriptions 3.1.1. Children Sixty-one children (female, 26; male, 35; age range, 13 months to 18 years) and nine adults (female, 6; male, 3; age range, 34–78 years) benefited from a molecular analysis for HL. Most presented with SN HL (52 children (85.2%); 8 adults (88.9%)). Six children (9.8%) had mixed (conductive and SN) HL, two children (3.3%) and one adult (11.1%) had transmission HL, and one child had right SN HL and left mixed HL. HL severity among children was as follows: eight had mild HL (13.1%); 35 had moderate HL (57.4%); 10 had severe HL (16.4%); and eight had profound HL (13.1%); in addition, six (9.8%) showed progressive HL. (Severity was defined as mild: 26–40 dB hearing loss; moderate: 41–70 dB; severe: 71–90 dB; and profound: >91 dB.) The majority of children (53/61 (86.8%)) had bilateral HL. Congenital HL was diagnosed in 36/61 (59%) cases. Eleven (18%) children were diagnosed with prelingual HL (defined as identified at ≤1 year of age) and the remainder (14 patients (22.9%)) had postlingual HL (defined as identified at >1 year of age). Forty-one had no family history (67.2%) and only four probands (6.5%) were born from consanguineous parents. Of the 41 patients who underwent a CT scan and/or MRI investigation, 15 (36.6%) had middle/inner ear malformations. Twenty-one patients (34.4%) had other signs and symptoms in addition to HL. 3.1.2. Adults Four adults had moderate HL (44.4%), four had severe HL (44.4%), and one had profound HL (11.1%); all cases showed progressive HL. Three patients experienced a sudden, severe worsening of HL, associated with an upper airway infection in one patient and with vertigo in a second; the third patient did not recall any infection or vertigo associated with the onset of worsening of HL. All had bilateral HL at the time of consultation, but three patients had marked asymmetry at diagnosis. All were diagnosed with postlingual HL, but the age of onset was highly variable (8 to 65 years). Five patients had a family history (55.6%), three patients had no family history (33.3%), and one patient was adopted (no family history available). Two probands (22.2%) were born from consanguineous parents. Seven (77.8%) were investigated through CT scan and/or MRI, which revealed that four (44.4%) had middle/inner ear malformations. Interestingly, six of nine (66.7%) patients had additional symptoms to HL, with vertigo being the most frequent. 3.1.3. Patients Identified through Direct Sequencing of GJB2/GJB6 All four children who were diagnosed through direct sequencing of GJB2/6 presented with SN, bilateral HL. Three had congenital HL and one was diagnosed with prelingual HL. Three had severe HL and one had moderate HL. None had any known family history or consanguinity. One adult was diagnosed through direct sequencing of GJB2/6 and displayed congenital, SN, bilateral profound HL. Three of these five patients were males and two were females, one of whom was the adult.
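The severity bands defined in section 3.1.1 translate directly into a small helper function; this is a hypothetical illustration of the stated dB thresholds, not code used in the study.

```python
# Hedged helper mirroring the severity bands stated above (dB of hearing loss).
def classify_hl_severity(db_loss: float) -> str:
    if db_loss < 26:
        return "below mild band"  # outside the categories defined above
    if db_loss <= 40:
        return "mild"             # 26-40 dB
    if db_loss <= 70:
        return "moderate"         # 41-70 dB
    if db_loss <= 90:
        return "severe"           # 71-90 dB
    return "profound"             # >91 dB

print(classify_hl_severity(55))  # -> moderate
```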
3.2. Molecular Results 3.2.1. Children Among the 61 cases investigated through WES, molecular confirmation was obtained in 32 probands (52.5%), with the involvement of 22 different genes. Five patients (patients 1, 33, 42, 46 and 51) showed a variant of unknown significance (VUS). Of note, patient 1 had a likely pathogenic de novo mutation in COL4A5 and two missense variants in compound heterozygosity in COL11A2 classified as VUS. These variants did not fulfill ACMG criteria and were not counted or reported as positive, even if highly concordant with the patient's phenotype. This was mostly due to the inherited status of the variant from a parent with normal audition or because segregation analysis was not possible. Seventeen of 32 (53.1%) patients had autosomal recessive inheritance patterns; 14 (42%) had an autosomal dominant disorder, and one case had X-linked HL. Among the 14 autosomal dominant cases, nine were reported de novo, three were inherited from a healthy parent, and one was inherited from an affected parent. One patient (# 21) had one HL variant (POU4F3) inherited from an affected father and a de novo incidental finding in OPA1. Another patient (# 24) had a de novo causative variant in COL11A1 and a SMAD3 variant inherited from an affected mother. Of the 32 children with a positive molecular diagnostic test, 17 (53.1%) had mutations in non-syndromic HL-associated genes, of which 14 were autosomal recessive (43.8%). Fifteen (46.9%) cases had pathogenic variants in syndromic HL-associated genes, of which 11 were transmitted in an autosomal dominant pattern (32.4%). Seven patients were counted as syndromic but did not display any sign other than HL at the time of diagnosis. The most common HL causative genes were STRC (5 cases), ACTG1 (3 cases), COL11A1 (3 cases), and GJB2 (3 cases). Among these, only COL11A1 is responsible for both syndromic and non-syndromic HL. Four additional children were diagnosed with a GJB2 mutation through direct sequencing. Three patients were compound heterozygotes for a STRC point mutation and carried a STRC deletion on the other allele. Two other patients showed bi-allelic deletion of the STRC gene. One case was caused by a heterozygous gene conversion on one allele and a CNV on the other; one patient showed heterozygous deletion of COL11A1. A total of seven cases were caused by CNVs (21.9%). 3.2.2. Adults Among the nine adults who underwent molecular investigations, two had a molecular diagnosis. One was diagnosed with neurofibromatosis type 2 and the other displayed a variant in the COCH gene. Both patients followed an autosomal dominant inheritance pattern. No family segregation was available and therefore it was not possible to conclude on a de novo or inherited status of these variants. One patient was diagnosed with GJB2 variants by direct sequencing (# 75) and another showed a rare missense variant in the TBC1D24 gene (# 64). The latter variant was classified as a VUS based on ACMG criteria.
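As a quick arithmetic check, the diagnostic yields quoted in section 3.2 follow directly from the reported counts; the snippet below simply recomputes them.

```python
# Diagnostic yields recomputed from the counts reported in section 3.2.
children_tested, children_solved = 61, 32
adults_tested, adults_solved = 9, 2

print(f"children: {children_solved / children_tested:.1%}")  # 52.5%
print(f"adults:   {adults_solved / adults_tested:.1%}")      # 22.2%
```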
3.3. Brief Description of Individual Cases Confirmed by Molecular Diagnosis 3.3.1. COL4A5 Patient 1. A six-year-old female presented with language delay, mild left and moderate right HL, associated with cochlear malformation. She had a conventional binaural behind-the-ear (BTE) hearing aid. No relevant family history was reported. WES identified a heterozygous likely pathogenic de novo missense variant in COL4A5 (c.1525G > C, p.(Gly509Arg)). The COL4A5 gene is associated with X-linked Alport syndrome, characterized by SN HL as well as ocular and kidney involvement. Females with a COL4A5 mutation can display HL, but it is usually less frequent and tends to occur later in life. Nevertheless, a nephrology follow-up was organized, given the risk of renal complications in these patients. WES also identified two missense variants in COL11A2 classified as VUS and described below. 3.3.2. USH1G Patient 2. A 21-month-old female was diagnosed with profound bilateral SN HL with no relevant family history. WES identified a homozygous missense variant c.1373A > T, p.(Asp458Val) in USH1G and parental segregation was confirmed. USH1G is responsible for Usher syndrome type 1, an autosomal recessive condition that associates congenital, profound SN HL, vestibular areflexia, and adolescent-onset retinitis pigmentosa. She benefited from sequential bilateral cochlear implantation and a routine ophthalmologic evaluation. The ophthalmological check-up revealed a pathological electroretinogram and close follow-up is ongoing. 3.3.3. GJB2 Patient 3. A four-year-old male was diagnosed with bilateral, moderate, congenital SN HL, with no relevant family history. He had a conventional binaural BTE hearing aid. WES revealed a deletion c.35delG and a heterozygous missense variant c.101T > C, p.(Met34Thr) in GJB2. Parental segregation confirmed that the mutations were in trans. Patient 8. An eight-year-old male with congenital moderate SN HL, a conventional binaural BTE hearing aid and no relevant family history. An inner ear CT scan was normal. WES identified compound heterozygous mutations c.35del, p.(Gly12Valfs*2) and c.139G > T, p.(Glu47*) in GJB2. Parental segregation confirmed that the mutations were in trans. Patient 20. A two-year-old boy with congenital severe, bilateral SN HL. He also presented palmoplantar keratoderma. Family history was unremarkable. An inner ear CT scan showed dilatation of the internal auditory canals and inner ear malformation. He benefited from sequential bilateral cochlear implantation. WES revealed a pathogenic heterozygous de novo mutation c.223C > T, p.(Arg75Trp) in GJB2. Missense variants at this residue are associated with autosomal dominant HL and palmoplantar keratoderma. 3.3.4. SIX1 Patient 4. An 11-year-old male with moderate SN HL on the right side and mixed, profound HL on the left side, and no family history of HL. He had bilateral inner ear malformations revealed by CT scan and a left-sided congenital cholesteatoma. At physical examination, he presented with a pre-auricular pit on the left side. He had a conventional binaural BTE hearing aid. WES revealed a heterozygous de novo SIX1 pathogenic missense mutation (c.386A > C, p.(Tyr129Ser)). SIX1 is associated with branchiootorenal syndrome, which is characterized by branchial arch anomalies, hearing impairment (malformations of the auricle with pre-auricular pits and conductive or SN hearing impairment), and renal malformations. Follow-up was completed with renal ultrasonography, which was normal. 3.3.5. LARS2 Patient 5. An eight-year-old female with postlingual profound, bilateral SN HL and no relevant family history. She benefited from a cochlear implantation on the right side. WES showed two compound heterozygous mutations (c.457A > C, p.(Asn153His) and c.1565C > A, p.(Thr522Asn)) in LARS2, classified as likely pathogenic and pathogenic, respectively.
LARS2 is associated with Perrault syndrome, which is characterized by SN HL in males and females and ovarian dysfunction in females. Pubertal development will be monitored so that puberty can be induced if needed, permitting normal bone mineralization. In the case of ovarian insufficiency, oocyte cryopreservation will be considered. Follow-up was completed with ovarian ultrasonography, and an endocrinological follow-up was organized. 3.3.6. ILDR1 Patient 6. A 10-year-old male with congenital, profound, bilateral SN HL. He benefited from a unilateral cochlear implantation. Apart from being born into a consanguineous union, he had no other relevant family history. WES identified a homozygous nonsense mutation (c.942C > A, p.(Cys314*)) in ILDR1, classified as pathogenic. Mutations in this gene are known to cause a prelingual, nonprogressive, nonsyndromic form of SN deafness. 3.3.7. ACTG1 Patient 7. A five-year-old male with postlingual unilateral (left) mild mixed HL. He had no relevant family history and no other health problems. The CT scan showed an incus malformation on both sides, but normal inner ears. WES identified a heterozygous, likely pathogenic de novo mutation, c.440G > A, p.(Arg147His), in the ACTG1 gene. Patient 16. A 15-year-old female with postlingual, bilateral, moderate SN HL. She had a binaural conventional BTE hearing aid. No relevant family history was noted. WES revealed a heterozygous, likely pathogenic de novo mutation, c.826G > A, p.(Glu276Lys), in ACTG1. Patient 30. A 17-year-old male with an initial mild SN postlingual HL that had progressed to moderate HL. Both his paternal grandmother and his maternal grandfather showed late-onset HL. He had worn a bilateral conventional BTE hearing aid since the age of 16 years. WES identified a heterozygous, pathogenic de novo mutation, c.830C > T, p.(Thr277Ile), in ACTG1. ACTG1 variants are responsible for DFNA20/DFNA26, usually associated with postlingual and progressive SN HL, and for type 2 Baraitser-Winter syndrome. Our patients did not have any syndromic features to date and thus we considered that these mutations were related to autosomal dominant deafness 20/26 (MIM: 604717). 3.3.8. GATA3 Patient 9. A five-year-old male with congenital moderate SN HL and bilateral renal cysts. He had a binaural conventional BTE hearing aid. No relevant family history was noted. WES identified a pathogenic heterozygous de novo mutation, c.778 + 1G > A, p.?, in GATA3. Patient 19. An 18-month-old male with a similar history to patient 9. He presented bilateral moderate SN HL, unilateral kidney dysplasia and cystic dilatation of the rete testis of the right testis. He had a bilateral conventional BTE hearing aid. WES identified a pathogenic, de novo heterozygous mutation, c.431delG, p.(Gly144Alafs*51), in GATA3. GATA3 is associated with HDR syndrome, i.e., hypoparathyroidism, SN deafness and renal dysplasia. Hypoparathyroidism can appear later in life and both patients are under endocrinological surveillance. 3.3.9. SLC17A8 Patient 10. A four-year-old male with congenital bilateral, moderate SN HL. His maternal grandmother was also known for HL, without further information. He had a bilateral conventional BTE hearing aid. WES identified a likely pathogenic, heterozygous mutation (c.634C > A, p.(Pro212Thr)) in SLC17A8 inherited from his mother, who has normal audition. SLC17A8 is known to be associated with highly variable non-syndromic HL. Affected male members are reported with earlier onset and a more severe phenotype. 3.3.10. LOXHD1 Patient 11.
An eight-year-old female with bilateral moderate SN HL and no relevant family history. She had a bilateral conventional BTE hearing aid. WES identified a pathogenic homozygous mutation (c.3061 + 1G > A, p.?) in LOXHD1. Parental segregation was confirmed in the mother, but was not available for the father. LOXHD1 is associated with autosomal recessive bilateral SN HL, which can be progressive. Mutations in this gene have also been recently associated with late-onset Fuchs corneal dystrophy and therefore ophthalmological surveillance was recommended. 3.3.11. OTOA Patient 17. A three-year-old female with bilateral mild-to-moderate congenital SN HL, born to consanguineous parents without any relevant family history. WES identified a paternal gene conversion between the OTOA gene and the OTOAP1 pseudogene and a maternal deletion of OTOA. Gene conversion between OTOA and its pseudogene OTOAP1 is a known mechanism leading to the generation of a pathogenic OTOA allele. Exons 20 to 28 of the OTOA gene are located in a 68 kb region that was segmentally duplicated during evolution and resulted in the emergence of the OTOAP1 pseudogene located 820 kb upstream of OTOA. Therefore, these genes share a high level of homology (>99%). In our patient, gene conversion replaced exons 20–21 of the OTOA gene with the corresponding exons 1 and 2 of the pseudogene OTOAP1. This gene conversion is expected to result in a premature stop codon that would either result in a truncated protein or an absence of protein through mRNA nonsense-mediated decay. These findings were confirmed by polymerase chain reaction/Sanger sequencing and MLPA. Family segregation confirmed the inheritance of each allele from one of the parents. OTOA is related to autosomal recessive non-syndromic HL. 3.3.12. WFS1 Patient 18. A two-year-old female who suffered from congenital bilateral moderate SN HL. HL was progressive and she developed profound bilateral deafness. No relevant family history was noted. She benefited from sequential bilateral cochlear implantation. WES identified a pathogenic de novo heterozygous mutation (c.2051C > T, p.(Ala684Val)) in WFS1. WFS1 is associated with an autosomal dominant Wolfram-like syndrome, which associates progressive HL, optic atrophy and, later, diabetes mellitus. After diagnosis, the patient had an ophthalmological evaluation that revealed partial bilateral optic atrophy. Endocrinological follow-up was organized. 3.3.13. STRC Patient 12. A seven-year-old male was diagnosed with mild bilateral SN HL when he started elementary school. He had a conventional binaural BTE hearing aid. MRI was normal, and there was no relevant family history. WES identified a compound heterozygous CKMT1B, STRC, CATSPER2 deletion, confirmed by MLPA (chr15:g.(43851199_43890333)_(43940820_44038794)del), and a point mutation (c.4917_4918delACinsCT, p.(Leu1640Phe)) in STRC. Family segregation was confirmed. Patient 13. A seven-year-old male with postlingual, moderate bilateral SN HL. The audiogram showed a "U-curve". He had a conventional binaural BTE hearing aid. MRI was normal with no relevant family history. WES identified a compound heterozygous CKMT1B, STRC deletion and a CKMT1B, STRC, CATSPER2 deletion. This result was confirmed by MLPA (chr15:g.[(43851199_43890333)_(43897676_43924279)del];[(43851199_43890333)_(43940820_44038794)del]) and family segregation. Patient 14. A nine-year-old female with moderate prelingual SN bilateral HL and no relevant family history.
WES identified a compound heterozygous c.4425G > C, p.(Trp1475Cys) in STRC and a CKMT1B, STRC, CATSPER2 deletion confirmed by MLPA (chr15:g.(43851199_43890333)_(43940820_44038794)del); family segregation confirmed bi-allelic inheritance. Patient 15. A nine-year-old male with congenital moderate bilateral SN HL and no relevant family history. He had a conventional binaural BTE hearing aid. WES identified a homozygous deletion of CKMT1B, STRC, CATSPER2 confirmed by MLPA (chr15:g.(43851199_43890333)_(43940820_44038794)del) and family segregation confirmed bi-allelic inheritance. Patient 22. A 14-year-old female with moderate, prelingual bilateral SN HL. Her father suffered from moderate bilateral HL and her uncle suffered from unilateral moderate HL. She had worn a conventional BTE hearing aid since the age of one year. WES identified a heterozygous deletion of CKMT1B and STRC, and probably CATSPER2, confirmed by MLPA (chr15:g.(43851199_43890391)_(?_44038820)del), as well as a heterozygous mutation (c.4837G > T, p.(Glu1613*)) in the STRC gene. Family segregation confirmed the inheritance of each allele from a healthy parent. STRC alterations cause autosomal recessive nonsyndromic SN deafness type 16. HL usually starts during childhood (birth to the age of 10 years). Contiguous gene deletion syndrome on chromosome 15q15.3, including STRC and CATSPER2, as identified in patient 15, is responsible for a deafness-infertility syndrome. This syndrome is characterized by early-onset deafness in both males and females and is associated with infertility in males. 3.3.14. POU4F3 and OPA1 Patient 21. A six-year-old female with moderate bilateral SN HL detected at elementary school screening. Her audiogram was "spoon"-shaped. She had a conventional binaural BTE hearing aid. Family history was positive on her father's side (paternal uncle, grandfather and great-grandfather affected with adult-onset moderate HL). Her father had encountered difficulties discriminating sounds since childhood, but he reported his audiological evaluation as normal at 18 years of age. WES revealed a heterozygous pathogenic mutation (c.502del, p.(Ala168Profs*36)) in POU4F3 inherited from the father and a likely pathogenic, heterozygous de novo mutation (c.1118C > G, p.(Ser373Cys)) in OPA1, never reported previously. POU4F3 is associated with autosomal dominant deafness type 15, a progressive form of nonsyndromic SN HL. Onset is postlingual, usually between the second and sixth decades of life. Intrafamilial variability has been reported. OPA1 is related to optic atrophy and optic atrophy plus syndrome. Multisystem neurological disease involving optic atrophy, deafness and neuromuscular complications is associated with all types of mutations. However, optic atrophy plus syndrome is more frequent with missense mutations in the OPA1 gene, while classic optic atrophy is mostly associated with deletions. Both groups of mutations are most frequently observed in the GTPase domain of the OPA1 gene. The mutation carried by our patient was located in this GTPase domain. Although she had a normal ophthalmological and neurological evaluation, close follow-up was recommended as the signs and symptoms can be progressive and highly variable, with a mean onset around 10 years of age. 3.3.15. COL11A1 Patient 23. A three-year-old male was referred to an ENT specialist due to delayed speech. Audiograms showed mild prelingual SN bilateral HL.
He benefited from a neuropediatric evaluation because of interaction difficulties, excessive shyness and motor coordination problems. The family history revealed that his two brothers, his mother, two of his maternal aunts and his grandmother suffer from HL. No one was wearing hearing aids. Age of onset was highly variable (35 years for his mother and 10–15 years for his brothers). HL appeared to be isolated in the family and, in particular, there was no history of cleft palate. WES revealed a large heterozygous pathogenic deletion of COL11A1 (chr1:g.[(103388956_103400026)_(104094395_?)del]) inherited from his mother and present in both of the patient's brothers. The patient and his brothers underwent ophthalmological investigations, which were completely normal. Patient 26. A 15-year-old male with congenital bilateral moderate SN HL. His grandmother had a very late-onset history of HL. Hearing aids were added sequentially (right ear at 2 years and left ear at 4 years). WES revealed a heterozygous, likely pathogenic deletion of a splice site in COL11A1 (c.4519-2Adel, p.?) inherited from his apparently asymptomatic mother, who had never benefited from an audiological examination. Close ophthalmological follow-up was recommended, even if HL seemed isolated. 3.3.16. COL11A1 and SMAD3 Patient 24. An 18-year-old female with congenital bilateral moderate SN HL who had been wearing hearing aids since the age of 4 years. She was born with a cleft palate and was highly myopic. She reported bruising easily and chronic knee pain related to recurrent patella luxation. The family history revealed that her mother needed surgical correction of cervical vertebrae, complicated by severe hemorrhage, but without further information. She had two healthy brothers. All of her mother's pregnancies were uncomplicated. WES revealed a heterozygous pathogenic de novo variant in COL11A1 (c.4547G > T, p.(Gly1516Val)) and a likely pathogenic heterozygous variant in SMAD3 (c.3G > A, p.(Met1?)) inherited from her mother. COL11A1 is associated with Marshall syndrome and Stickler syndrome type II, as well as autosomal dominant deafness type 37. Marshall syndrome is characterized by dysmorphic signs (microretrognathia, long philtrum, Robin sequence) and key clinical features such as cleft palate, myopia and SN HL. Main complications are vitreoretinal degeneration, glaucoma and retinal detachment. Stickler syndrome is characterized by dysmorphic signs (micrognathia and Pierre Robin sequence with cleft palate), SN HL, early-onset myopia, glaucoma and a risk of retinal detachment. Joint hypermobility is common and associated with an increased risk of early-onset arthrosis. Patients 23 and 26 were affected only by HL, while patient 24 displayed clear signs of Stickler syndrome. It has recently been highlighted that COL11A1 is associated with nonsyndromic HL and should be included in nonsyndromic HL gene panels. SMAD3 is associated with Loeys-Dietz syndrome type 3, characterized by cardiac malformations (mitral valve prolapse, aortic insufficiency, left ventricular hypertrophy) and an increased risk of aortic aneurysm and dissection, arterial aneurysm and arterial tortuosity, pectus deformity, and an increased risk of internal organ ruptures. Patient 24 underwent extensive vascular investigations and had no signs of vascular involvement or characteristic Loeys-Dietz dysmorphic features. However, she did describe bruising easily, had fair skin and long fingers. Due to her young age, we proposed to introduce a regular vascular follow-up. 3.3.17.
TRIOBP Patient 25. A five-year-old female was referred to the ENT department for profound, seemingly isolated and congenital HL. No relevant family history was noted, but her parents were consanguineous. WES revealed a homozygous pathogenic duplication in the TRIOBP gene (c.3214dup, p.(Arg1072Profs*12)). Family segregation confirmed bi-allelic inheritance. The TRIOBP gene is associated with nonsyndromic autosomal recessive HL, which is usually bilateral and prelingual. 3.3.18. TMPRSS3 Patient 27. A nine-year-old female was referred to the ENT department due to severe bilateral progressive HL. She had been wearing regular BTE hearing aids since the age of 8 years with poor results. The family history was unremarkable. WES revealed compound heterozygosity in the TMPRSS3 gene (c.400A > T, p.(Lys134*); c.646C > T, p.(Arg216Cys)); both variants were reported as pathogenic. Family segregation confirmed that each variant was inherited from a healthy parent. Patient 29. A three-year-old male was referred to the ENT department due to speech delay. OAEs at birth were reported normal, but a hearing evaluation revealed severe bilateral "ski slope" pattern SN HL. He had a sister whose audition was in the normal range. His father reported unilateral HL since childhood. Hearing aids were implemented since the diagnosis. WES revealed compound heterozygosity in the TMPRSS3 gene (c.916G > A, p.(Ala306Thr); c.749delT, p.(Leu250Argfs*25)); both variants were reported as pathogenic. Family segregation confirmed the bi-allelic inheritance of the variants. The TMPRSS3 gene is associated with nonsyndromic recessive SN HL type 8/10. HL can be pre- or postlingual, depending on the type of mutation, and HL has been described as isolated. 3.3.19. COL4A3 Patient 28. A 10-year-old male with mild bilateral SN HL. The family history revealed that both his father and grandfather suffered from mild HL, progressive in his father's case. He had one healthy brother. We could not find an audiological examination (OAE) performed at birth, but screening at the age of 5 years was pathological. WES revealed a heterozygous, likely pathogenic mutation in COL4A3 inherited from his father (c.4826G > A, p.(Arg1609Gln)). COL4A3 mutations are associated with several phenotypes, such as Alport syndrome, which associates renal failure, variable HL (which can be of late onset) and ocular involvement, including cataract and retinopathies. Heterozygous mutations in the COL4A3 gene responsible for autosomal dominant Alport syndrome might also generate isolated HL or HL with ocular involvement in some carriers. Neither our patient nor his father showed any sign of kidney or ocular involvement, but close follow-up was organized as phenotypic variability has been described, even in the same family. 3.3.20. MARVELD2 Patient 31. A 16-year-old male presenting with severe bilateral HL. He wore regular BTE hearing aids. In addition, he suffered from cholestatic hepatopathy since infancy, as well as hyperactivity and impaired concentration. The family history was unremarkable. His parents, of European descent, were not related. WES revealed a homozygous pathogenic mutation in the MARVELD2 gene (c.1331 + 2T > C, p.?). MARVELD2 is associated with autosomal recessive nonsyndromic HL type 49, more often found in the East Caucasian population. No association with hepatic involvement in this patient could be demonstrated. However, MARVELD2 encodes a tight junction protein and therefore we might reconsider this statement in the years to come.
Indeed, one might speculate that a defect in a tight junction protein could affect the integrity of the epithelium and, therefore, the function of the organ. 3.3.21. MYO15A Patient 32. A two-year-old male with profound bilateral congenital HL. At birth, he presented with hypothyroidism and an atrial septal defect. He benefited from sequential bilateral cochlear implants (Nov 2020, right side; Feb 2021, left side). He had one older sister without any hearing impairment. His paternal grandfather was reported with mild HL and his great-granduncle with very early HL (without further information). His parents have normal audition. WES revealed a homozygous variant in the MYO15A gene. Family segregation confirmed inheritance of each of the variants from a healthy parent. The MYO15A gene is associated with autosomal recessive severe nonsyndromic HL type 3. HL is described as congenital and severe to profound. 3.3.22. NF2 Patient 62. A 40-year-old male presented with unilateral progressive HL and bilateral tinnitus. MRI revealed bilateral schwannomas, which raised the hypothesis of neurofibromatosis type 2. Exome sequencing revealed a heterozygous pathogenic variant in the NF2 gene (c.1579G > T, p.(Glu527*)), which confirmed the diagnosis. NF2-related syndrome is characterized by the progressive appearance of vestibular schwannomas, which are usually bilateral and responsible for HL, and may or may not be associated with tinnitus. Schwannomas may also develop on other cranial, spinal or peripheral nerves with related symptoms. Some patients may develop intracranial or intraspinal meningiomas or a malignant tumor of the nervous system (ependymoma). Ophthalmological involvement, including cataract and loss of visual acuity, is quite common. Seventy percent of NF2 patients have cutaneous tumors. 3.3.23. COCH Patient 63. A 66-year-old female with bilateral severe HL for 30 years. Lately, she had noticed worsening of hearing impairment and bilateral vestibular areflexia. She wore regular hearing aids. Her sister had bilateral HL and wore hearing aids. Her brother and father displayed late-onset HL, but less severe. Her son is being evaluated for HL. WES revealed a heterozygous pathogenic mutation in the COCH gene (c.341T > C, p.(Leu114Pro)). Mutations in the COCH gene lead to progressive bilateral HL with autosomal dominant transmission. Vestibular involvement is frequent. Onset is described from 20 to 60 years of age. 3.4. Variant/s of Unknown Significance (VUS) Patient 1 (described previously). In addition to the COL4A5 variant, she was found to carry two missense variants in compound heterozygosity in COL11A2 classified as VUS. Mutations in COL11A2 are known to be responsible for autosomal dominant and autosomal recessive nonprogressive profound, congenital or prelingual HL. This gene is also associated with a syndromic HL, otospondylomegaepiphyseal dysplasia. To date, our patient does not present any other signs and symptoms suggestive of a collagenopathy. Patient 33. A 12-year-old male with prelingual bilateral severe SN HL. His younger brother was also affected. They had no other signs and symptoms. WES identified compound heterozygous mutations (c.641G > A, p.(Arg214His)) and (c.643T > G, p.(Trp215Gly)), classified as VUS, in TBC1D24. Both variants were also present in the affected brother and family segregation was confirmed. Patient 64.
A 33-year-old female was referred to the ENT department for moderate but progressive bilateral SN HL present since the age of 18 years. The family history revealed that her father suffered from late-onset unilateral HL. She was born to a consanguineous union. WES revealed a heterozygous mutation in TBC1D24 (c.418C > G, p.(Leu140Val)) classified as a VUS. TBC1D24 has been described in both autosomal recessive and autosomal dominant HL. Autosomal recessive TBC1D24-related syndromes show a marked phenotypic pleiotropy with multisystem involvement. The severity spectrum ranges from isolated deafness, to benign myoclonic epilepsy restricted to childhood with complete seizure control and normal intellect, to early-onset epileptic encephalopathy with severe developmental delay and early death. There is no distinct phenotypic correlation with the pathogenic variant type or location as yet, but patterns are emerging. Autosomal dominant TBC1D24-related syndromes are marked by adult-onset and progressive HL. For both families of patients 33 and 64, the variants did not fulfill ACMG criteria and were not counted as positive results.

Patient 42. An 11-year-old female with congenital bilateral nonsyndromic SN HL. Neonatal hearing screening was abnormal and she underwent cochlear implantation. The family history revealed that she was born to a consanguineous union (second-degree cousins), but with no history of HL. Her parents were of Egyptian origin. WES revealed a heterozygous variant in the CDH23 gene. This gene is associated with autosomal recessive HL, autosomal dominant Usher syndrome 1D, or autosomal recessive/digenic Usher syndrome. A family segregation study was not possible, and pathogenicity therefore could not be established through segregation. As ACMG criteria were not met, the variant was classified as a VUS. The patient will be re-evaluated on a regular basis.

Patient 46. A 14-year-old male with HL detected at the age of 3 years, who had worn regular BTE hearing aids since then. At age 12, he benefited from a cochlear implant. He was diagnosed with attention deficit hyperactivity disorder and receives special support at school. The family history was unremarkable. WES revealed two variants in PCDH15 (c.4885delA, p.(Ser1629fs) and c.964T > A, p.(Ser322Thr)) and two variants in USH2A (c.13133C > T, p.(Pro4378Leu) and c.6800C > T, p.(Pro2267Leu)). Family segregation confirmed the position in cis of all variants, inherited from the healthy mother. All variants were classified as VUS. The PCDH15 gene is associated with autosomal recessive HL type 23 and autosomal recessive/digenic Usher syndrome. The USH2A gene is associated with autosomal recessive Usher syndrome type 2A and retinitis pigmentosa 38.

Patient 51. A 10-year-old female presented with auditory neuropathy resulting in moderate HL. A trial of hearing aids was unsuccessful. The family history was unremarkable. WES revealed a heterozygous variant in the OSBPL2 gene (c.852_854delTATinsATG, p.(Phe284_Met285delinsLeuTrp)) inherited from the healthy mother. This variant therefore did not fulfill ACMG criteria and was not considered a positive result. The OSBPL2 gene is responsible for autosomal dominant HL with high variability in terms of age of onset (5–32 years) and expressivity; HL is usually progressive.
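Many of the protein-level annotations in this series can be sanity-checked arithmetically: numbering the coding DNA from the A of the initiator ATG, position n falls in codon ⌈n/3⌉. The following minimal sketch (illustrative only; the helper is ours and was not part of the diagnostic workflow) spot-checks several variants reported above:

```python
import math

def codon_of(cds_pos: int) -> int:
    """Codon number for a coding-DNA (c.) position, with c.1 at the A of the
    initiator ATG. Intronic positions such as c.1331+2 (MARVELD2, patient 31)
    lie outside the CDS, which is why their protein effect is given as p.?."""
    return math.ceil(cds_pos / 3)  # equivalently (cds_pos + 2) // 3

# Spot checks against variants reported in this series:
assert codon_of(646) == 216     # TMPRSS3 c.646C>T -> p.(Arg216Cys)
assert codon_of(35) == 12       # GJB2 c.35delG    -> p.(Gly12Valfs*2)
assert codon_of(3214) == 1072   # TRIOBP c.3214dup -> p.(Arg1072Profs*12)
assert codon_of(13133) == 4378  # USH2A c.13133C>T -> p.(Pro4378Leu)
```

Checks of this kind can flag transcription errors in variant tables, although they do not replace verification against the reference transcript.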
3.5. Molecular Results through Direct Sequencing of GJB2-GJB6

Patient 71. A three-year-old male with congenital severe bilateral SN HL and no relevant family history. Direct sequencing of GJB2 and GJB6 revealed a homozygous deletion in the GJB2 gene (c.35delG, p.(Gly12Valfs*2)).

Patient 72. An eight-year-old male with prelingual severe bilateral SN HL and no relevant family history. Direct sequencing of GJB2 and GJB6 revealed a homozygous deletion in the GJB2 gene (c.35delG, p.(Gly12Valfs*2)). Family segregation was confirmed.

Patient 73. A five-year-old female with moderate congenital SN HL. Her mother had moderate-to-high frequency HL. She wore regular bilateral BTE hearing aids. Direct sequencing of the GJB2 and GJB6 loci revealed compound heterozygous mutations in the GJB2 gene (c.59T > C, p.(Ile20Thr) and c.109G > A, p.(Val37Ile)). Family segregation was confirmed.

Patient 74. A six-year-old male with prelingual moderate bilateral SN HL and no relevant family history. Direct sequencing of the GJB2 and GJB6 loci revealed a homozygous mutation in the GJB2 gene (c.269T > C, p.(Leu90Pro)). Family segregation was confirmed.

Patient 75. A 20-year-old female with congenital severe HL. She wore regular bilateral BTE hearing aids and was referred to the genetics department because of her wish to conceive a child. She had two sisters with severe bilateral HL who also wore hearing aids. She also had one sister and one brother without any hearing impairment. Direct sequencing of the GJB2 locus revealed a homozygous deletion in the GJB2 gene (c.35delG, p.(Gly12Valfs*2)).
Two probands (22.2%) were born from consanguineous parents. Seven (77.8%) were investigated through CT scan and/or MRI, which revealed that four (44.4%) had middle/inner ear malformations. Interestingly, six of nine (66.7%) patients had additional symptoms to HL, with vertigo being the most frequent . 3.1.3. Patients Identified through Direct Sequencing of GJB2/GJB6 All four children that were diagnosed through direct sequencing of GJB2/6 presented with SN, bilateral HL. Three had congenital HL and one was diagnosed with pre lingual HL. Three had severe HL and one had moderate HL. None were known with any family history or consanguinity. One adult was diagnosed through direct sequencing of GJB2/6. He displayed congenital, SN, bilateral profound HL. Three of the patient were males and two were females, out of wich one was an adult. Sixty-one children (female, 26; male, 35; age range, 13 months to 18 years) and nine adults (female, 6; male, 3; age range, 34–78 years) benefited from a molecular analysis for HL. Most presented with SN HL (52 children (85.2%); 8 adults (88.9%)). Six children (9.8%) had mixed (conductive and SN) HL, two children (3.3%) and one adult (11.1%) had transmission HL, and one child had right SN HL and left mixed HL. HL severity among children was as follows: eight had mild HL (13.1%); 35 had moderate HL (57.4%); 10 had severe HL (16.4%); eight had profound HL (13.1%); and 6 showed progressive HL (9.8%). (Severity was defined as mild: 26–40 db hearing loss; moderate: 41–70 db hearing loss; severe: 71–90 db hearing loss and profound >91 db hearing loss). The majority of children (53/61 (86.8%)) had bilateral HL. Congenital HL was diagnosed in 36/61 (59%) cases. Eleven (18%) children were diagnosed with prelingual HL (defined as identified at ≤1 year of age) and the remainder (14 patients (22.9%)) had postlingual HL (defined as identified at >1 year of age). Forty-one had no family history (67.2%) and only four probands (6.5%) were born from consanguineous parents. Of the 41 patients who underwent a CT scan and/or MRI investigation, 15 (36.6%) had middle/inner ear malformations. Twenty-one patients (34.4%) had other signs and symptoms in addition to HL . Four adults had moderate HL (44.4%), four had severe HL (44.4%), and one had profound HL (11.1%); all cases showed progressive HL . Three patients experienced a violent worsening of HL associated with upper airway infection for one patient and vertigo for the second case. The third patient did not recall any infection or vertigo associated with the onset of worsening of HL. All had bilateral HL at the time of consultation, but three patients had marked asymmetry at diagnosis. All were diagnosed with postlingual HL, but the age of onset was highly variable (8 to 65 years). Five patients had a family history (55.6%), three patients had no family history (33.3%), and one patient was adopted (no family history available). Two probands (22.2%) were born from consanguineous parents. Seven (77.8%) were investigated through CT scan and/or MRI, which revealed that four (44.4%) had middle/inner ear malformations. Interestingly, six of nine (66.7%) patients had additional symptoms to HL, with vertigo being the most frequent . All four children that were diagnosed through direct sequencing of GJB2/6 presented with SN, bilateral HL. Three had congenital HL and one was diagnosed with pre lingual HL. Three had severe HL and one had moderate HL. None were known with any family history or consanguinity. 
One adult was diagnosed through direct sequencing of GJB2/6. He displayed congenital, SN, bilateral profound HL. Three of the patient were males and two were females, out of wich one was an adult. 3.2.1. Children Among the 61 cases investigated through WES, molecular confirmation was performed in 32 probands (52.5%) with the involvement of 22 different genes . Five patients (patients 1, 33, 42, 46,51), showed a variant of unknown significance (VUS) . Of note, patient 1 had a likely pathogenic de novo mutation in COL4A5 and two missense variants in compound heterozygosity in COL11A2 classified as VUS. . These variants did not fulfill ACMG criteria and were not counted or reported as positive, even if highly concordant with the patient’s phenotype. This was mostly due to the inherited status of the variant from a parent with normal audition or because segregation analysis was not possible. Seventeen of 32 (53.1%) patients had autosomal recessive inheritance patterns; 14 (42%) had an autosomal dominant disorder, and one case had X-linked HL . Among the 14 autosomal dominant cases, nine were reported de novo, three were inherited from a healthy parent, and one was inherited from an affected parent. One patient (# 21) had one HL variant ( POU4F3 ) inherited from an affected father and a de novo incidental finding in OPA1 . Another patient (# 24) had a de novo causative variant in COL11A1 and an inherited SMAD3 variant from an affected mother . Of the 32 children with a positive molecular diagnostic test, 17 (53.2%) had mutations in non-syndromic HL-associated genes, of which 14 were autosomal recessive (43.8%). Fifteen (46.9%) cases had pathogenic variants in syndromic HL-associated genes, of which 11 were transmitted in an autosomal dominant pattern (32.4%). Seven patients were counted as syndromic, but did not display any other sign apart from HL at the time of diagnosis . The most common HL causative genes were STRC (5 cases), ACTG 1 (3 cases), COL11A1 (3 cases), and GJB2 (3 cases) . Among these, only COL11 A1 is responsible for both syndromic and non-syndromic HL. Four additional children were diagnosed with a GJB2 mutation through direct sequencing . Three patients were compound heterozygotes for a STRC point mutation and carried a STRC deletion on the other allele. Two other patients showed bi-allelic deletion of the STRC gene. One case was caused by a heterozygous gene conversion on one allele and CNV on the other; one patient showed heterozygous deletion of COL11A1 . A total of seven cases were caused by CNV (21.9%). 3.2.2. Adults Among the nine adults that underwent molecular investigations, two had a molecular diagnosis . One was diagnosed with neurofibromatosis type 2 and the other displayed a variant in the COCH gene. Both patients followed an autosomal dominant inheritance pattern. No family segregation was available and therefore it was not possible to conclude on a de novo or inherited status of these variants . One patient was diagnosed with GJB2 variants by direct sequencing ( (# 75)) and another showed a rare missense variant in the TBC1D24 gene ( (# 64)). The latter variant was classified as a VUS based on ACMG criteria. Among the 61 cases investigated through WES, molecular confirmation was performed in 32 probands (52.5%) with the involvement of 22 different genes . Five patients (patients 1, 33, 42, 46,51), showed a variant of unknown significance (VUS) . 
Of note, patient 1 had a likely pathogenic de novo mutation in COL4A5 and two missense variants in compound heterozygosity in COL11A2 classified as VUS. . These variants did not fulfill ACMG criteria and were not counted or reported as positive, even if highly concordant with the patient’s phenotype. This was mostly due to the inherited status of the variant from a parent with normal audition or because segregation analysis was not possible. Seventeen of 32 (53.1%) patients had autosomal recessive inheritance patterns; 14 (42%) had an autosomal dominant disorder, and one case had X-linked HL . Among the 14 autosomal dominant cases, nine were reported de novo, three were inherited from a healthy parent, and one was inherited from an affected parent. One patient (# 21) had one HL variant ( POU4F3 ) inherited from an affected father and a de novo incidental finding in OPA1 . Another patient (# 24) had a de novo causative variant in COL11A1 and an inherited SMAD3 variant from an affected mother . Of the 32 children with a positive molecular diagnostic test, 17 (53.2%) had mutations in non-syndromic HL-associated genes, of which 14 were autosomal recessive (43.8%). Fifteen (46.9%) cases had pathogenic variants in syndromic HL-associated genes, of which 11 were transmitted in an autosomal dominant pattern (32.4%). Seven patients were counted as syndromic, but did not display any other sign apart from HL at the time of diagnosis . The most common HL causative genes were STRC (5 cases), ACTG 1 (3 cases), COL11A1 (3 cases), and GJB2 (3 cases) . Among these, only COL11 A1 is responsible for both syndromic and non-syndromic HL. Four additional children were diagnosed with a GJB2 mutation through direct sequencing . Three patients were compound heterozygotes for a STRC point mutation and carried a STRC deletion on the other allele. Two other patients showed bi-allelic deletion of the STRC gene. One case was caused by a heterozygous gene conversion on one allele and CNV on the other; one patient showed heterozygous deletion of COL11A1 . A total of seven cases were caused by CNV (21.9%). Among the nine adults that underwent molecular investigations, two had a molecular diagnosis . One was diagnosed with neurofibromatosis type 2 and the other displayed a variant in the COCH gene. Both patients followed an autosomal dominant inheritance pattern. No family segregation was available and therefore it was not possible to conclude on a de novo or inherited status of these variants . One patient was diagnosed with GJB2 variants by direct sequencing ( (# 75)) and another showed a rare missense variant in the TBC1D24 gene ( (# 64)). The latter variant was classified as a VUS based on ACMG criteria. 3.3.1. COL4A5 Patient 1. A six-year-old female presented with language delay, mild left and moderate right HL, associated with cochlear malformation. She had a conventional binaural behind-the-ear (BTE) hearing aid. No relevant family history was reported. WES identified a heterozygous likely pathogenic de novo missense variant in COL4A5 (c.1525G > C, p.(Gly509Arg)). The COL4A5 gene is associated with X-linked Alport syndrome characterized by SN HL, as well as ocular and kidney involvement. Females with COL4A5 mutation can display HL, but it is usually less frequent and tends to occur in later life . Nevertheless, a nephrology follow-up was organized, given the risk of renal complications in these patients . WES also identified two missense variants in COL11A2 classified as VUS and described below. 3.3.2. 
USH1G Patient 2. A 21-month-old female was diagnosed with profound bilateral SN HL with no relevant family history. WES identified a homozygous missense variant c.1373A > T, p.(Asp458Val) in USH1G and parental segregation was confirmed. USH1G is responsible for Usher syndrome type 1, an autosomal recessive condition that associates a congenital, profound SN HL, vestibular areflexia, and adolescent-onset retinitis pigmentosa. She benefited from a sequential bilateral cochlear implantation and a routine ophthalmologic evaluation . The ophthalmological check-up revealed a pathological electroretinogram and close follow-up is ongoing. 3.3.3. GJB2 Patient 3. A four-year-old male was diagnosed with bilateral, moderate, congenital SN HL, with no relevant family history. He had a conventional binaural BTE hearing aid. WES revealed a deletion c.35delG and a heterozygous missense c.101T > C, p.(Met34Thr) variant of GJB2 . Parental segregation confirmed that mutations were in trans. Patient 8. An eight-year-old male with congenital moderate SN HL, a conventional binaural BTE hearing aid and no relevant family history. An inner ear CT scan was normal. WES identified a compound heterozygous c.35del, p.(Gly12Valfs*2); c.139G > T, p.(Glu47*) in GJB2 . Parental segregation confirmed that mutations were in trans. Patient 20. A two-year-old boy with congenital severe, bilateral SN HL. He also presented palmoplantar keratoderma. Family history was unremarkable. An inner ear CT scan showed dilatation of the internal auditory canals and inner ear malformation. He benefited from a sequential bilateral cochlear implantation. WES revealed a pathogenic heterozygous de novo c. 223C > T, p.(Arg75Trp) in GJB2 . Missense in this residue is associated with autosomal dominant HL and palmoplantar keratoderma . 3.3.4. SIX1 Patient 4. An 11-year-old male with moderate SN HL on the right side and mixed, profound HL on the left side, and no family history of HL. He had bilateral inner ear malformations revealed by CT scan and left side congenital cholesteatoma. At physical examination, he presented with a pre-auricular pit on the left side. He had a conventional binaural BTE hearing aid. WES revealed a heterozygous de novo SIX1 pathogenic missense mutation (c.386A > C, p.(Tyr129Ser)). SIX1 is associated with branchiootorenal syndrome, which is characterized by branchial arch anomalies, hearing impairment (malformations of the auricle with pre-auricular pits and conductive or SN hearing impairment), and renal malformations . Follow-up was completed with renal ultrasonography, which was normal. 3.3.5. LARS2 Patient 5. An eight-year-old female with postlingual profound, bilateral, SN HL and no relevant family history. She benefited from a cochlear implantation on the right side. WES showed two compound heterozygous mutations (c.457A > C, p.(Asn153His) and c.1565C > A, p.(Thr522Asn)) in LARS2. Mutations were classified as likely pathogenic and pathogenic respectively. LARS2 is associated with Perrault syndrome, which is characterized by SN HL in males and females and ovarian dysfunction in females. Pubertal development will be monitored in the future in order to induce puberty and permit normal bone mineralization. In the case of ovarian insufficiency, oocyte cryopreservation will be considered . Follow-up was completed with ovarian ultrasonography and an endocrinological follow-up was organized. 3.3.6. ILDR1 Patient 6. A 10-year-old male with congenital, profound, bilateral SN HL. 
He benefited from a unilateral cochlear implantation. Apart from being born into a consanguineous union, he had no other relevant family history. WES identified a homozygous nonsense mutation (c.942C > A, p.(Cys314*)) in ILDR1 , classified as pathogenic. Mutations in this gene are known to cause a prelingual, nonprogressive, nonsyndromic form of SN deafness . 3.3.7. ACTG1 Patient 7. A five-year-old male with postlingual unilateral (left) mild mixed HL. He had no relevant family history and no other health problems. The CT scan showed an uncus malformation on both sides, but normal inner ears. WES identified a heterozygous, likely pathogenic de novo mutation, c.440G > A, p.(Arg147His) in the ACTG1 gene. Patient 16. A 15-year-old female with postlingual, bilateral, moderate SN HL. She had a binaural conventional BTE hearing aid. No relevant family history was noted. WES revealed a heterozygous, likely pathogenic de novo mutation, c.826G > A, p.(Glu276Lys) in ACTG1 . Patient 30. A 17-year-old male with an initial mild SN postlingual HL that had progressed to moderate HL. Both his paternal grandmother and his maternal grandfather showed late onset HL. He had a bilateral conventional BTE hearing aid since the age of 16 years. WES identified a heterozygous, pathogenic de novo mutation, c.830C > T,p.(Thr277Ile) in ACT G1. ACTG1 variants are responsible for DFNA20/DFNA26, usually associated with postlingual and progressive SN HL and a type 2 Baraitser-Winter syndrome. Our patients did not have any syndromic features to date and thus we considered that these mutations were related to autosomal dominant deafness 20/26 (MIM: 604717) . 3.3.8. GATA3 Patient 9. A five-year-old male with congenital moderate SN HL and bilateral renal cysts. He had a binaural conventional BTE hearing aid. No relevant family history was noted. WES identified a pathogenic heterozygous de novo mutation c.778 + 1G > A, p.?, in GATA3 . Patient 19. An 18-month-old male with a similar history to patient 9. He presented bilateral moderate SN HL, unilateral kidney dysplasia and cystic dilatation of the rete testis of the right testis. He had a bilateral conventional BTE hearing aid. WES identified a pathogenic, de novo heterozygous c.431delG, p.(Gly144Alafs*51) in GATA3 . GATA3 is associated with HDR syndrome, i.e., hypoparathyroidism, SN deafness and renal dysplasia. Hypoparathyroidism can appear later in life and both patients are under endocrinological surveillance . 3.3.9. SLC17A8 Patient 10. A four-year-old-male with congenital bilateral, moderate SN HL. His maternal grandmother was also known for HL, without further information. He had a bilateral conventional BTE hearing aid. WES identified a likely pathogenic, heterozygous mutation (c.634C > A, p.(Pro212Thr)) in SLC17A8 inherited from his mother with normal audition. SLC17A8 is known to be associated with highly variable non-syndromic HL. Affected male members are reported with earlier onset and a more severe phenotype . 3.3.10. LOXHD1 Patient 11. An eight-year-old female with bilateral moderate SN HL and no relevant family history. She had a bilateral conventional BTE hearing aid. WES identified a pathogenic homozygous mutation (c.3061 + 1G > A, p.?) in LOXHD1 . Parental segregation was confirmed in the mother, but was not available for the father. LOXHD1 is associated with autosomal recessive bilateral SN HL, which can be progressive. 
Mutations in this gene have also been recently associated with late-onset Fuchs corneal dystrophy and therefore ophthalmological surveillance was recommended . 3.3.11. OTOA Patient 17. A three-year-old female with bilateral mild-to-moderate congenital SN HL born to consanguineous parents without any relevant family history. WES identified a paternal gene conversion between OTOA gene and OTOAP1 pseudogene and a maternal deletion of OTOA . Gene conversion between OTOA and its pseudogene OTOAP 1 is a known mechanism leading to the generation of a pathogenic OTOA allele . Exons 20 to 28 of the OTOA gene are located in a 68 kb region that was segmentally duplicated during evolution and resulted in the emergence of the OTOAP1 pseudogene located 820 kb upstream of OTOA. Therefore, these genes share a high level of homology (>99%). In our patient, gene conversion occurred between the exon 20–21 of the OTOA gene replaced by exon 1 and 2 of the pseudogene OTPAP1 . This gene conversion is expected to result in a premature stop codon that would either result in a truncated protein or an absence of protein through mRNA nonsense-mediated decay. These findings were confirmed by polymerase chain reaction/Sanger sequencing and MLPA. Family segregation confirmed the inheritance of each allele from one of the parents. OTOA is related to autosomal recessive non-syndromic HL . 3.3.12. WSF1 Patient 18. A two-year-old female who suffered from congenital bilateral moderate SN HL. HL was progressive and she had profound bilateral deafness. No relevant family history was noted. She benefited from sequential bilateral cochlear implantation. WES identified a pathogenic de novo heterozygous mutation (c.2051C > T, p.(Ala684Val)) in WSF1 . WSF1 is associated with an autosomal dominant Wolfram-like syndrome, which associates progressive HL, optic atrophy and, later, diabetes mellitus. After diagnosis, the patient had an ophthalmological evaluation that revealed partial bilateral optic atrophy. Endocrinological follow-up was organized . 3.3.13. STRC Patient 12. A seven-year-old male was diagnosed with mild bilateral SN HL when he started elementary school. He had a conventional binaural BTE hearing aid. MRI was normal, and there was no relevant family history. WES identified a compound heterozygous CKMT1B, STRC, CATSPER2 deletion, confirmed by MLPA (chr15:g.(43851199_43890333)_(43940820_44038794)del) and (c.4917_4918delACinsCT, p.(Leu1640Phe)) in STRC . Family segregation was confirmed. Patient 13. A seven-year-old male with postlingual, moderate bilateral SN HL. The audiogram showed a “U-curve”. He had a conventional binaural BTE hearing aid. MRI was normal with no relevant family history. WES identified a compound heterozygous CKMT1B, STRC deletion and a CKMT1B , STRC , CATSPER2 deletion). This result was confirmed by MLPA (chr15:g.[(43851199_43890333)_(43897676_43924279)del];[(43851199_43890333)_(43940820_44038794)del] and family segregation. Patient 14. A nine-year-old female with moderate prelingual SN bilateral HL and no relevant family history. WES identified a compound heterozygous c.4425G > C, p.(Trp1475Cys) in STRC and CKMT1B, STRC, CATSPER2 deletion confirmed by MLPA (chr15:g.(43851199_43890333)_(43940820_44038794)del) and family segregation confirmed bi-allelic inheritance. Patient 15. A nine-year-old male with congenital moderate bilateral SN HL and no relevant family history. He had a conventional binaural BTE hearing aid. 
WES identified a homozygous deletion of CKMT1B, STRC, CATSPER2 confirmed by MLPA (chr15:g.(43851199_43890333)_(43940820_44038794)del) and family segregation confirmed bi-allelic inheritance. Patient 22. A 14-year-old female with moderate, prelingual bilateral SN HL. Her father suffered from moderate bilateral HL and her uncle suffered from unilateral moderate HL. She had a conventional BTE hearing aid since the age of one year. WES identified a heterozygous deletion of CKMT1B and STRC and probably CATSPER2 confirmed by MLPA (chr15:g.(43′851′199_43′890′391)_(?_44′038′820)del), as well as a heterozygous (c.4837G > T,p.(Glu1613*)) mutation in the STRC gene. Family segregation confirmed the inheritance of each allele from a healthy parent. STRC alterations cause autosomal recessive nonsyndromic SN deafness type-16. HL starts usually during childhood (birth to the age of 10 years). Contiguous gene deletion syndrome on chromosome 15q15.3, including STRC and CATSPER2 , as identified in patient 15, is responsible for a deafness-infertility syndrome. This syndrome is characterized by early-onset deafness in both males and females and associated with infertility in males . 3.3.14. POU4F3 and OPA1 Patient 21. A six-year-old female with a moderate bilateral SN HL detected at elementary school screening. Her audiogram was “spoon”-shaped. She had a conventional binaural BTE hearing aid. Family history was positive on her father’s side (paternal uncle, grandfather and great-grandfather affected with adult onset moderate HL). Her father had encountered difficulties discriminating sounds since childhood, but he reported his audiological evaluation as normal at 18 years of age. WES revealed a heterozygous pathogenic mutation (c.502del, p.(Ala168Profs*36)) in POU4F3 inherited from the father and a likely pathogenic, heterozygous de novo mutation (c.1118C > G, p.(Ser373Cys)) in OPA1 , never reported previously. POU4F3 is associated with autosomal dominant deafness type 15, a progressive form of nonsyndromic SN HL. Onset is postlingual, usually between the second and sixth decades of life. Intrafamilial variability has been reported . OPA1 is related with optic atrophy and optic atrophy plus syndrome . Multisystem neurological disease involving optic atrophy, deafness and neuromuscular complications is associated with all types of mutations. However, optic atrophy plus syndrome is more frequent with a missense mutation in OPA1 gene, while classic optic atrophy is mostly associated with deletion. Both groups of mutations are most frequently observed in the GTPase domain of the OPA1 gene . The mutation carried by our patient was located in this GTPpase domain. Although she had a normal ophtalmological and neurological evaluation, close follow-up was recommended as the signs and symptoms can be progressive and highly variable, with a mean onset around 10 years of age . 3.3.15. COL11A1 Patient 23. A three-year-old male was referred to an ENT specialist due to delayed speech. Audiograms showed mild prelingual SN bilateral HL. He benefited from neuropediatric evaluation because of interaction difficulties, excessive shyness and motor coordination problems. The family history revealed that his two brothers, his mother, two of his maternal aunts and his grandmother suffer from HL. No one was wearing hearing aids. Age of onset was highly variable (35 years for his mother and 10–15 years for his brothers). HL appeared to be isolated in the family and, in particular, there was no history of cleft palate. 
WES revealed a large heterozygous pathogenic deletion of COL11A1 (chr1:g.[(103388956_103400026)_(104094395_?)del]) inherited from his mother and present in both of the patient‘s brothers. The patient and his brothers underwent ophtalmological investigations, which were completely normal. Patient 26. A 15-year-old male with congenital bilateral moderate SN HL. His grandmother had a very late onset history of HL. Hearing aids were added sequentially (right ear at 2 years and left ear at 4 years). WES revealed a heterozygous likely pathogenic deletion of a splicing site in COL11A1 (c.4519-2Adel,p.?) inherited from his apparently asymptomatic mother who never benefited from an audiological examination. Close ophtalmological follow-up was recommended, even if HL seemed isolated. 3.3.16. COL11A1 and SMAD3 Patient 24. An 18-year-old female with congenital bilateral moderate SN HL who had been wearing hearing aids since the age of 4 years. She was born with a cleft palate and was highly myopic. She reported bruising easily and chronic knee pain related to recurrent patella luxation. The family history revealed that her mother needed surgical correction of cervical vertebrae, complicated by severe hemorrhage, but without further information. She had two healthy brothers. All of her mother’s pregnancies were uncomplicated. WES revealed a heterozygous pathogenic de novo variant in COL11A1 (c.4547G > T, p.(Gly1516Val)) and a likely pathogenic heterozygous variant in SMAD3 (c.3G > A (p.Met1?)) inherited from her mother. COl11A1 is associated with Marshall syndrome and Stickler syndrome type II or autosomal dominant deafness type 37. Marshal syndrome is characterized by dysmorphic signs (microcretrognathia, long philtrum, robin sequence) and key clinical features, such as cleft palate, myopia and SN HL. Main complications are vitreoretinal degeneration, glaucoma and retinal detachment. Stickler syndrome is characterized by dysmorphic signs (micrognathia and Pierre Robin sequence with cleft palate), SN HL, early onset myopia, glaucoma and a risk of retinal detachment. Joint hypermobility is common and associated with an increased risk of early onset arthrosis . Patients 23 and 26 were affected only by HL, while patient 24 displayed clear signs of Stickler syndrome. It has recently been highlighted that COL11A1 is associated with nonsyndromic HL and should be included in nonsyndromic HL gene panels . SMAD3 is associated with Loeys-Dietz syndrome type 3, characterized by cardiac malformation (mitral valve prolapse, aortic insufficiency, left ventricular hypertrophy) and an increased risk of aortic aneurysm and dissection, arterial aneurysm and arterial tortuosity, pectus deformity, and an increased risk of internal organ ruptures . Patient 24 underwent extensive vascular investigations and had no signs of vascular involvement or characteristic Loeys-Dietz dysmorphic features. However, she did describe bruising easily, had fair skin and long fingers. Due to her young age, we proposed to introduce a regular vascular follow-up. 3.3.17. TRIOBP Patient 25. A five-year-old female was referred to the ENT department for profound, seemingly isolated and congenital HL. No relevant family history was noted, but her parents were consanguineous. WES revealed homozygous pathogenic duplication in the TRIOBP gene (c.3214dup, p.ArgArg1072Profs*12). Family segregation confirmed a bi-allelic inheritance. The TRIOBP gene is associated with nonsyndromic autosomal recessive HL, which is usually bilateral and prelingual . 
3.3.18. TMPRSS3 Patient 27. A nine-year-old female was referred to the ENT department due to severe bilateral progressive HL. She had been wearing regular BTE hearing aids since the age of 8 years with poor results. The family history was unremarkable. WES revealed compound heterozygosity in the TMPRSS3 gene (c.400 A > T, p.(Lys134*); c.646C > T, p.(Arg216Cys)); both variants were reported as pathogenic. Family segregation confirmed that each variant was inherited from a healthy parent. Patient 29. A three-year-old male was referred to the ENT department due to speech delay. OAEs at birth were reported normal, but a hearing evaluation revealed severe bilateral “ski slope” pattern SN HL. He had a sister whose audition was in the normal range. His father reported unilateral HL since childhood. Hearing aids were implemented since the diagnosis. WES revealed compound heterozygosity in the TMPRSS3 gene (c.916G > A, p.(Ala306Thr); (c.749delT,p.(Leu250Argfs*25)); both variants were reported as pathogenic. Family segregation confirmed the bi-allelic inheritance of the variants. The TMPRSS3 gene is associated with nonsyndromic recessive SN HL type 8/10. HL can be pre- or post- lingual, depending on the type of mutation, and HL has been described as isolated . 3.3.19. COL4A3 Patient 28. A 10-year-old male with mild bilateral SN HL. The family history revealed that both his father and grandfather suffered from mild HL, but progressive for his father. He had one healthy brother. We could not find an audiological examination (OEA) performed at birth, but screening at the age of 5 years was pathological. WES revealed a heterozygous, likely pathogenic mutation in COL4A3 inherited from his father (c.4826G > A, p.(Arg1609Gln)). COL4A3 mutations are associated with several phenotypes, such as Alport syndrome that associates renal failure, variable HL (which can be of late onset) and ocular involvement, including cataract and retinopathies. Heterozygous mutation in the COL4A3 gene responsible for autosomal dominant Alport syndrome might also generate isolated HL or HL with ocular involvement in some carriers . Neither our patient nor his father showed any sign of kidney or ocular involvement, but close follow-up was organized as phenotypic variability has been described, even in the same family . 3.3.20. MARVELD2 Patient 31. A 16-year-old male presenting with severe bilateral HL. He wore regular BTE hearing aids. In addition, he suffered from cholestatic hepatopathy since infancy, as well as hyperactivity and impaired concentration. The family history was unremarkable. His parents of European descent were not related. WES revealed a homozygous pathogenic mutation in the MARVELD2 gene (c.1331 + 2T > C, p.. MARVELD2 is associated with autosomal recessive nonsyndromic HL type 49 more often found in the East Caucasian population. No association with hepatic involvement in this patient could be demonstrated. However, MARVELD2 is a tight junction protein and therefore we might reconsider this statement in the years to come . Indeed we might stretch out that a defect in tight junction protein could easily affect the integrity of the epithelia, and therefore, the function of the organ. 3.3.21. MYO15A Patient 32. A two-year-old male with profound bilateral congenital HL. At birth, he presented with hypothyroidism and an atrial septal defect. He benefited from a sequential bilateral cochlear implant (Nov 2020, right side; Feb 2021, left side). He had one older sister without any hearing impairment. 
His paternal grandfather was reported with mild HL and his great-granduncle with very early HL (without further information). His parents have normal audition. WES revealed a homozygous variant in the MYO15A gene. Family segregation confirmed inheritance of each of the variants from a healthy parent. MYO15A gene is associated with autosomic recessive severe nonsyndromic HL type 3. HL is described as congenital and severe to profound . 3.3.22. NF2 Patient 62. A 40-year-old male presented with unilateral progressive HL and bilateral tinnitus. MRI revealed bilateral schwannoma, which raised the hypothesis of neurofibromatosis type 2. Exome sequencing revealed a heterozygous pathogenic variant in the NF2 gene (c.1579G > T, p.(Glu527*)), which confirmed the diagnosis. A NF2 -related syndrome is characterized by the progressive appearance of vestibular schwannomas, which are usually bilateral and responsible for HL, and may or may not be associated with tinnitus. Schwannomas may also develop on other cranial, spinal or peripheric nerves with related symptoms. Some patients may develop intracranial or intraspinal meningiomas or a malignant tumor of the nervous system (ependymoma). Ophtalmological involvement, including cataract and loss of visual acuity, is quite common. Seventy percent of NF2 patients have cutaneous tumors . 3.3.23. COCH Patient 63. A 66-year-old female with bilateral severe HL since 30 years. Lately, she noticed worsening of hearing impairment and bilateral vestibular areflexia. She wore regular hearing aids. Her sister had bilateral HL and wore hearing aids. Her brother and father displayed late-onset HL, but less severe. Her son is being evaluated for HL. WES revealed a heterozygous pathogenic mutation in the COCH gene (c.341T > C, p.Leu114Pro). The mutation associated with the COCH gene leads to progressive bilateral HL with autosomal dominant transmission. Vestibular involvement is frequent. Onset is described from 20 to 60 years of age . COL4A5 Patient 1. A six-year-old female presented with language delay, mild left and moderate right HL, associated with cochlear malformation. She had a conventional binaural behind-the-ear (BTE) hearing aid. No relevant family history was reported. WES identified a heterozygous likely pathogenic de novo missense variant in COL4A5 (c.1525G > C, p.(Gly509Arg)). The COL4A5 gene is associated with X-linked Alport syndrome characterized by SN HL, as well as ocular and kidney involvement. Females with COL4A5 mutation can display HL, but it is usually less frequent and tends to occur in later life . Nevertheless, a nephrology follow-up was organized, given the risk of renal complications in these patients . WES also identified two missense variants in COL11A2 classified as VUS and described below. USH1G Patient 2. A 21-month-old female was diagnosed with profound bilateral SN HL with no relevant family history. WES identified a homozygous missense variant c.1373A > T, p.(Asp458Val) in USH1G and parental segregation was confirmed. USH1G is responsible for Usher syndrome type 1, an autosomal recessive condition that associates a congenital, profound SN HL, vestibular areflexia, and adolescent-onset retinitis pigmentosa. She benefited from a sequential bilateral cochlear implantation and a routine ophthalmologic evaluation . The ophthalmological check-up revealed a pathological electroretinogram and close follow-up is ongoing. GJB2 Patient 3. 
A four-year-old male was diagnosed with bilateral, moderate, congenital SN HL, with no relevant family history. He had a conventional binaural BTE hearing aid. WES revealed a deletion c.35delG and a heterozygous missense c.101T > C, p.(Met34Thr) variant of GJB2 . Parental segregation confirmed that mutations were in trans. Patient 8. An eight-year-old male with congenital moderate SN HL, a conventional binaural BTE hearing aid and no relevant family history. An inner ear CT scan was normal. WES identified a compound heterozygous c.35del, p.(Gly12Valfs*2); c.139G > T, p.(Glu47*) in GJB2 . Parental segregation confirmed that mutations were in trans. Patient 20. A two-year-old boy with congenital severe, bilateral SN HL. He also presented palmoplantar keratoderma. Family history was unremarkable. An inner ear CT scan showed dilatation of the internal auditory canals and inner ear malformation. He benefited from a sequential bilateral cochlear implantation. WES revealed a pathogenic heterozygous de novo c. 223C > T, p.(Arg75Trp) in GJB2 . Missense in this residue is associated with autosomal dominant HL and palmoplantar keratoderma . SIX1 Patient 4. An 11-year-old male with moderate SN HL on the right side and mixed, profound HL on the left side, and no family history of HL. He had bilateral inner ear malformations revealed by CT scan and left side congenital cholesteatoma. At physical examination, he presented with a pre-auricular pit on the left side. He had a conventional binaural BTE hearing aid. WES revealed a heterozygous de novo SIX1 pathogenic missense mutation (c.386A > C, p.(Tyr129Ser)). SIX1 is associated with branchiootorenal syndrome, which is characterized by branchial arch anomalies, hearing impairment (malformations of the auricle with pre-auricular pits and conductive or SN hearing impairment), and renal malformations . Follow-up was completed with renal ultrasonography, which was normal. LARS2 Patient 5. An eight-year-old female with postlingual profound, bilateral, SN HL and no relevant family history. She benefited from a cochlear implantation on the right side. WES showed two compound heterozygous mutations (c.457A > C, p.(Asn153His) and c.1565C > A, p.(Thr522Asn)) in LARS2. Mutations were classified as likely pathogenic and pathogenic respectively. LARS2 is associated with Perrault syndrome, which is characterized by SN HL in males and females and ovarian dysfunction in females. Pubertal development will be monitored in the future in order to induce puberty and permit normal bone mineralization. In the case of ovarian insufficiency, oocyte cryopreservation will be considered . Follow-up was completed with ovarian ultrasonography and an endocrinological follow-up was organized. ILDR1 Patient 6. A 10-year-old male with congenital, profound, bilateral SN HL. He benefited from a unilateral cochlear implantation. Apart from being born into a consanguineous union, he had no other relevant family history. WES identified a homozygous nonsense mutation (c.942C > A, p.(Cys314*)) in ILDR1 , classified as pathogenic. Mutations in this gene are known to cause a prelingual, nonprogressive, nonsyndromic form of SN deafness . ACTG1 Patient 7. A five-year-old male with postlingual unilateral (left) mild mixed HL. He had no relevant family history and no other health problems. The CT scan showed an uncus malformation on both sides, but normal inner ears. WES identified a heterozygous, likely pathogenic de novo mutation, c.440G > A, p.(Arg147His) in the ACTG1 gene. Patient 16. 
A 15-year-old female with postlingual, bilateral, moderate SN HL. She had a binaural conventional BTE hearing aid. No relevant family history was noted. WES revealed a heterozygous, likely pathogenic de novo mutation, c.826G > A, p.(Glu276Lys) in ACTG1 . Patient 30. A 17-year-old male with an initial mild SN postlingual HL that had progressed to moderate HL. Both his paternal grandmother and his maternal grandfather showed late onset HL. He had a bilateral conventional BTE hearing aid since the age of 16 years. WES identified a heterozygous, pathogenic de novo mutation, c.830C > T,p.(Thr277Ile) in ACT G1. ACTG1 variants are responsible for DFNA20/DFNA26, usually associated with postlingual and progressive SN HL and a type 2 Baraitser-Winter syndrome. Our patients did not have any syndromic features to date and thus we considered that these mutations were related to autosomal dominant deafness 20/26 (MIM: 604717) . GATA3 Patient 9. A five-year-old male with congenital moderate SN HL and bilateral renal cysts. He had a binaural conventional BTE hearing aid. No relevant family history was noted. WES identified a pathogenic heterozygous de novo mutation c.778 + 1G > A, p.?, in GATA3 . Patient 19. An 18-month-old male with a similar history to patient 9. He presented bilateral moderate SN HL, unilateral kidney dysplasia and cystic dilatation of the rete testis of the right testis. He had a bilateral conventional BTE hearing aid. WES identified a pathogenic, de novo heterozygous c.431delG, p.(Gly144Alafs*51) in GATA3 . GATA3 is associated with HDR syndrome, i.e., hypoparathyroidism, SN deafness and renal dysplasia. Hypoparathyroidism can appear later in life and both patients are under endocrinological surveillance . SLC17A8 Patient 10. A four-year-old-male with congenital bilateral, moderate SN HL. His maternal grandmother was also known for HL, without further information. He had a bilateral conventional BTE hearing aid. WES identified a likely pathogenic, heterozygous mutation (c.634C > A, p.(Pro212Thr)) in SLC17A8 inherited from his mother with normal audition. SLC17A8 is known to be associated with highly variable non-syndromic HL. Affected male members are reported with earlier onset and a more severe phenotype . LOXHD1 Patient 11. An eight-year-old female with bilateral moderate SN HL and no relevant family history. She had a bilateral conventional BTE hearing aid. WES identified a pathogenic homozygous mutation (c.3061 + 1G > A, p.?) in LOXHD1 . Parental segregation was confirmed in the mother, but was not available for the father. LOXHD1 is associated with autosomal recessive bilateral SN HL, which can be progressive. Mutations in this gene have also been recently associated with late-onset Fuchs corneal dystrophy and therefore ophthalmological surveillance was recommended . OTOA Patient 17. A three-year-old female with bilateral mild-to-moderate congenital SN HL born to consanguineous parents without any relevant family history. WES identified a paternal gene conversion between OTOA gene and OTOAP1 pseudogene and a maternal deletion of OTOA . Gene conversion between OTOA and its pseudogene OTOAP 1 is a known mechanism leading to the generation of a pathogenic OTOA allele . Exons 20 to 28 of the OTOA gene are located in a 68 kb region that was segmentally duplicated during evolution and resulted in the emergence of the OTOAP1 pseudogene located 820 kb upstream of OTOA. Therefore, these genes share a high level of homology (>99%). 
In our patient, gene conversion occurred between the exon 20–21 of the OTOA gene replaced by exon 1 and 2 of the pseudogene OTPAP1 . This gene conversion is expected to result in a premature stop codon that would either result in a truncated protein or an absence of protein through mRNA nonsense-mediated decay. These findings were confirmed by polymerase chain reaction/Sanger sequencing and MLPA. Family segregation confirmed the inheritance of each allele from one of the parents. OTOA is related to autosomal recessive non-syndromic HL . WSF1 Patient 18. A two-year-old female who suffered from congenital bilateral moderate SN HL. HL was progressive and she had profound bilateral deafness. No relevant family history was noted. She benefited from sequential bilateral cochlear implantation. WES identified a pathogenic de novo heterozygous mutation (c.2051C > T, p.(Ala684Val)) in WSF1 . WSF1 is associated with an autosomal dominant Wolfram-like syndrome, which associates progressive HL, optic atrophy and, later, diabetes mellitus. After diagnosis, the patient had an ophthalmological evaluation that revealed partial bilateral optic atrophy. Endocrinological follow-up was organized . STRC Patient 12. A seven-year-old male was diagnosed with mild bilateral SN HL when he started elementary school. He had a conventional binaural BTE hearing aid. MRI was normal, and there was no relevant family history. WES identified a compound heterozygous CKMT1B, STRC, CATSPER2 deletion, confirmed by MLPA (chr15:g.(43851199_43890333)_(43940820_44038794)del) and (c.4917_4918delACinsCT, p.(Leu1640Phe)) in STRC . Family segregation was confirmed. Patient 13. A seven-year-old male with postlingual, moderate bilateral SN HL. The audiogram showed a “U-curve”. He had a conventional binaural BTE hearing aid. MRI was normal with no relevant family history. WES identified a compound heterozygous CKMT1B, STRC deletion and a CKMT1B , STRC , CATSPER2 deletion). This result was confirmed by MLPA (chr15:g.[(43851199_43890333)_(43897676_43924279)del];[(43851199_43890333)_(43940820_44038794)del] and family segregation. Patient 14. A nine-year-old female with moderate prelingual SN bilateral HL and no relevant family history. WES identified a compound heterozygous c.4425G > C, p.(Trp1475Cys) in STRC and CKMT1B, STRC, CATSPER2 deletion confirmed by MLPA (chr15:g.(43851199_43890333)_(43940820_44038794)del) and family segregation confirmed bi-allelic inheritance. Patient 15. A nine-year-old male with congenital moderate bilateral SN HL and no relevant family history. He had a conventional binaural BTE hearing aid. WES identified a homozygous deletion of CKMT1B, STRC, CATSPER2 confirmed by MLPA (chr15:g.(43851199_43890333)_(43940820_44038794)del) and family segregation confirmed bi-allelic inheritance. Patient 22. A 14-year-old female with moderate, prelingual bilateral SN HL. Her father suffered from moderate bilateral HL and her uncle suffered from unilateral moderate HL. She had a conventional BTE hearing aid since the age of one year. WES identified a heterozygous deletion of CKMT1B and STRC and probably CATSPER2 confirmed by MLPA (chr15:g.(43′851′199_43′890′391)_(?_44′038′820)del), as well as a heterozygous (c.4837G > T,p.(Glu1613*)) mutation in the STRC gene. Family segregation confirmed the inheritance of each allele from a healthy parent. STRC alterations cause autosomal recessive nonsyndromic SN deafness type-16. HL starts usually during childhood (birth to the age of 10 years). 
Contiguous gene deletion syndrome on chromosome 15q15.3, including STRC and CATSPER2 , as identified in patient 15, is responsible for a deafness-infertility syndrome. This syndrome is characterized by early-onset deafness in both males and females and associated with infertility in males . POU4F3 and OPA1 Patient 21. A six-year-old female with a moderate bilateral SN HL detected at elementary school screening. Her audiogram was “spoon”-shaped. She had a conventional binaural BTE hearing aid. Family history was positive on her father’s side (paternal uncle, grandfather and great-grandfather affected with adult onset moderate HL). Her father had encountered difficulties discriminating sounds since childhood, but he reported his audiological evaluation as normal at 18 years of age. WES revealed a heterozygous pathogenic mutation (c.502del, p.(Ala168Profs*36)) in POU4F3 inherited from the father and a likely pathogenic, heterozygous de novo mutation (c.1118C > G, p.(Ser373Cys)) in OPA1 , never reported previously. POU4F3 is associated with autosomal dominant deafness type 15, a progressive form of nonsyndromic SN HL. Onset is postlingual, usually between the second and sixth decades of life. Intrafamilial variability has been reported . OPA1 is related with optic atrophy and optic atrophy plus syndrome . Multisystem neurological disease involving optic atrophy, deafness and neuromuscular complications is associated with all types of mutations. However, optic atrophy plus syndrome is more frequent with a missense mutation in OPA1 gene, while classic optic atrophy is mostly associated with deletion. Both groups of mutations are most frequently observed in the GTPase domain of the OPA1 gene . The mutation carried by our patient was located in this GTPpase domain. Although she had a normal ophtalmological and neurological evaluation, close follow-up was recommended as the signs and symptoms can be progressive and highly variable, with a mean onset around 10 years of age . COL11A1 Patient 23. A three-year-old male was referred to an ENT specialist due to delayed speech. Audiograms showed mild prelingual SN bilateral HL. He benefited from neuropediatric evaluation because of interaction difficulties, excessive shyness and motor coordination problems. The family history revealed that his two brothers, his mother, two of his maternal aunts and his grandmother suffer from HL. No one was wearing hearing aids. Age of onset was highly variable (35 years for his mother and 10–15 years for his brothers). HL appeared to be isolated in the family and, in particular, there was no history of cleft palate. WES revealed a large heterozygous pathogenic deletion of COL11A1 (chr1:g.[(103388956_103400026)_(104094395_?)del]) inherited from his mother and present in both of the patient‘s brothers. The patient and his brothers underwent ophtalmological investigations, which were completely normal. Patient 26. A 15-year-old male with congenital bilateral moderate SN HL. His grandmother had a very late onset history of HL. Hearing aids were added sequentially (right ear at 2 years and left ear at 4 years). WES revealed a heterozygous likely pathogenic deletion of a splicing site in COL11A1 (c.4519-2Adel,p.?) inherited from his apparently asymptomatic mother who never benefited from an audiological examination. Close ophtalmological follow-up was recommended, even if HL seemed isolated. COL11A1 and SMAD3 Patient 24. 
An 18-year-old female with congenital bilateral moderate SN HL who had been wearing hearing aids since the age of 4 years. She was born with a cleft palate and was highly myopic. She reported bruising easily and chronic knee pain related to recurrent patella luxation. The family history revealed that her mother needed surgical correction of cervical vertebrae, complicated by severe hemorrhage, but without further information. She had two healthy brothers. All of her mother’s pregnancies were uncomplicated. WES revealed a heterozygous pathogenic de novo variant in COL11A1 (c.4547G > T, p.(Gly1516Val)) and a likely pathogenic heterozygous variant in SMAD3 (c.3G > A (p.Met1?)) inherited from her mother. COl11A1 is associated with Marshall syndrome and Stickler syndrome type II or autosomal dominant deafness type 37. Marshal syndrome is characterized by dysmorphic signs (microcretrognathia, long philtrum, robin sequence) and key clinical features, such as cleft palate, myopia and SN HL. Main complications are vitreoretinal degeneration, glaucoma and retinal detachment. Stickler syndrome is characterized by dysmorphic signs (micrognathia and Pierre Robin sequence with cleft palate), SN HL, early onset myopia, glaucoma and a risk of retinal detachment. Joint hypermobility is common and associated with an increased risk of early onset arthrosis . Patients 23 and 26 were affected only by HL, while patient 24 displayed clear signs of Stickler syndrome. It has recently been highlighted that COL11A1 is associated with nonsyndromic HL and should be included in nonsyndromic HL gene panels . SMAD3 is associated with Loeys-Dietz syndrome type 3, characterized by cardiac malformation (mitral valve prolapse, aortic insufficiency, left ventricular hypertrophy) and an increased risk of aortic aneurysm and dissection, arterial aneurysm and arterial tortuosity, pectus deformity, and an increased risk of internal organ ruptures . Patient 24 underwent extensive vascular investigations and had no signs of vascular involvement or characteristic Loeys-Dietz dysmorphic features. However, she did describe bruising easily, had fair skin and long fingers. Due to her young age, we proposed to introduce a regular vascular follow-up. TRIOBP Patient 25. A five-year-old female was referred to the ENT department for profound, seemingly isolated and congenital HL. No relevant family history was noted, but her parents were consanguineous. WES revealed homozygous pathogenic duplication in the TRIOBP gene (c.3214dup, p.ArgArg1072Profs*12). Family segregation confirmed a bi-allelic inheritance. The TRIOBP gene is associated with nonsyndromic autosomal recessive HL, which is usually bilateral and prelingual . TMPRSS3 Patient 27. A nine-year-old female was referred to the ENT department due to severe bilateral progressive HL. She had been wearing regular BTE hearing aids since the age of 8 years with poor results. The family history was unremarkable. WES revealed compound heterozygosity in the TMPRSS3 gene (c.400 A > T, p.(Lys134*); c.646C > T, p.(Arg216Cys)); both variants were reported as pathogenic. Family segregation confirmed that each variant was inherited from a healthy parent. Patient 29. A three-year-old male was referred to the ENT department due to speech delay. OAEs at birth were reported normal, but a hearing evaluation revealed severe bilateral “ski slope” pattern SN HL. He had a sister whose audition was in the normal range. His father reported unilateral HL since childhood. 
Hearing aids were fitted at diagnosis. WES revealed compound heterozygosity in the TMPRSS3 gene (c.916G > A, p.(Ala306Thr); c.749delT, p.(Leu250Argfs*25)); both variants were reported as pathogenic. Family segregation confirmed the bi-allelic inheritance of the variants. The TMPRSS3 gene is associated with nonsyndromic recessive SN HL type 8/10. HL can be pre- or postlingual, depending on the type of mutation, and HL has been described as isolated. COL4A3 Patient 28. A 10-year-old male with mild bilateral SN HL. The family history revealed that both his father and grandfather suffered from mild HL, which was progressive in his father. He had one healthy brother. We could not find an audiological examination (OAE) performed at birth, but screening at the age of 5 years was pathological. WES revealed a heterozygous, likely pathogenic mutation in COL4A3 inherited from his father (c.4826G > A, p.(Arg1609Gln)). COL4A3 mutations are associated with several phenotypes, such as Alport syndrome, which combines renal failure, variable HL (which can be of late onset) and ocular involvement, including cataract and retinopathies. Heterozygous mutations in the COL4A3 gene responsible for autosomal dominant Alport syndrome might also generate isolated HL, or HL with ocular involvement, in some carriers. Neither our patient nor his father showed any sign of kidney or ocular involvement, but close follow-up was organized, as phenotypic variability has been described even within the same family. MARVELD2 Patient 31. A 16-year-old male presenting with severe bilateral HL. He wore regular BTE hearing aids. In addition, he had suffered from cholestatic hepatopathy since infancy, as well as hyperactivity and impaired concentration. The family history was unremarkable. His parents, of European descent, were not related. WES revealed a homozygous pathogenic mutation in the MARVELD2 gene (c.1331 + 2T > C, p.?). MARVELD2 is associated with autosomal recessive nonsyndromic HL type 49, more often found in the East Caucasian population. No association with hepatic involvement in this patient could be demonstrated. However, MARVELD2 encodes a tight junction protein, and we may therefore need to reconsider this statement in the years to come. Indeed, one might argue that a defect in a tight junction protein could easily affect the integrity of the epithelia and, therefore, the function of the organ. MYO15A Patient 32. A two-year-old male with profound bilateral congenital HL. At birth, he presented with hypothyroidism and an atrial septal defect. He benefited from sequential bilateral cochlear implants (Nov 2020, right side; Feb 2021, left side). He had one older sister without any hearing impairment. His paternal grandfather was reported with mild HL and his great-granduncle with very early HL (without further information). His parents have normal hearing. WES revealed a homozygous variant in the MYO15A gene. Family segregation confirmed inheritance of the variant from each healthy parent. The MYO15A gene is associated with autosomal recessive severe nonsyndromic HL type 3. HL is described as congenital and severe to profound. NF2 Patient 62. A 40-year-old male presented with unilateral progressive HL and bilateral tinnitus. MRI revealed bilateral schwannomas, which raised the hypothesis of neurofibromatosis type 2. Exome sequencing revealed a heterozygous pathogenic variant in the NF2 gene (c.1579G > T, p.(Glu527*)), which confirmed the diagnosis.
An NF2-related syndrome is characterized by the progressive appearance of vestibular schwannomas, which are usually bilateral and responsible for HL, and may or may not be associated with tinnitus. Schwannomas may also develop on other cranial, spinal or peripheral nerves, with related symptoms. Some patients may develop intracranial or intraspinal meningiomas or a malignant tumor of the nervous system (ependymoma). Ophthalmological involvement, including cataract and loss of visual acuity, is quite common. Seventy percent of NF2 patients have cutaneous tumors. COCH Patient 63. A 66-year-old female with bilateral severe HL for 30 years. Lately, she had noticed worsening of her hearing impairment and bilateral vestibular areflexia. She wore regular hearing aids. Her sister had bilateral HL and wore hearing aids. Her brother and father displayed late-onset, but less severe, HL. Her son is being evaluated for HL. WES revealed a heterozygous pathogenic mutation in the COCH gene (c.341T > C, p.Leu114Pro). Mutations in the COCH gene lead to progressive bilateral HL with autosomal dominant transmission. Vestibular involvement is frequent. Onset is described from 20 to 60 years of age. Patient 1 (described previously). In addition to the COL4A5 variant, she was found to carry two missense variants in compound heterozygosity in COL11A2, classified as VUS. Mutations in COL11A2 are known to be responsible for autosomal dominant and autosomal recessive nonprogressive profound, congenital or prelingual HL. This gene is also associated with a syndromic HL, otospondylomegaepiphyseal dysplasia. To date, our patient does not present any other signs and symptoms suggestive of a collagenopathy. Patient 33. A 12-year-old male with prelingual bilateral severe SN HL. His younger brother was also affected. They had no other signs and symptoms. WES identified compound heterozygous mutations (c.641G > A, p.(Arg214His)) and (c.643T > G, p.(Trp215Gly)), classified as VUS, in TBC1D24. Both variants were also present in the affected brother, and family segregation was confirmed. Patient 64. A 33-year-old female referred to the ENT department due to moderate but progressive bilateral SN HL present since the age of 18 years. The family history revealed that her father suffered from late-onset unilateral HL. She was born to a consanguineous union. WES revealed a heterozygous mutation in TBC1D24 (c.418C > G, p.(Leu140Val)) classified as a VUS. TBC1D24 has been described with both autosomal recessive and autosomal dominant HL. Autosomal recessive TBC1D24-related syndromes show marked phenotypic pleiotropy with multisystem involvement. The severity spectrum ranges from isolated deafness, to benign myoclonic epilepsy restricted to childhood with complete seizure control and normal intellect, to early-onset epileptic encephalopathy with severe developmental delay and early death. There is no distinct phenotypic correlation with the pathogenic variant type or location as yet, but patterns are emerging. Autosomal dominant TBC1D24-related syndromes are marked by adult-onset, progressive HL. For both families of patients 33 and 64, the variants did not fulfill ACMG criteria and were not counted as positive results. Patient 42. An 11-year-old female with congenital bilateral non-syndromic SN HL. Neonatal hearing screening was abnormal and she underwent a cochlear implant. The family history revealed that she was born to a consanguineous union (2nd-degree cousins), but with no history of HL.
Her parents were of Egyptian origin. WES revealed a heterozygous variant in the CDH23 gene. This gene is associated with autosomal recessive HL, autosomal dominant Usher syndrome 1D, or autosomal recessive/digenic Usher syndrome. A family segregation study was not possible, and therefore pathogenicity through segregation could not be concluded. As ACMG criteria were not met, the variant was classified as a VUS. The patient will be re-evaluated on a regular basis. Patient 46. A 14-year-old male with HL detected at the age of 3 years who had worn regular BTE hearing aids since then. At age 12, he benefited from a cochlear implant. He was diagnosed with attention deficit hyperactivity disorder and benefits from special support at school. The family history was unremarkable. WES revealed two variants in PCDH15 (c.4885delA, p.S1629fs & c.964T > A, p.Ser322Thr) and two variants in USH2A (c.13133C > T, p.Pro437Leu & c.6800C > T, p.Pro2267Leu). Family segregation confirmed the position in cis of all variants, inherited from the healthy mother. All variants were classified as VUS. The PCDH15 gene is associated with autosomal recessive HL type 23 and autosomal recessive/digenic Usher syndrome. The USH2A gene is associated with autosomal recessive Usher syndrome type 2A and retinitis pigmentosa 38. Patient 51. A 10-year-old female presenting with auditory neuropathy resulting in moderate HL. An attempt with hearing aids was not successful. The family history was unremarkable. WES revealed a heterozygous variant in the OSBPL2 gene (c.852_854delTATinsATG, p.(Phe284_Met285delinsLeuTrp)) inherited from the healthy mother. Therefore, this variant did not fulfill ACMG criteria and was not considered a positive result. The OSBPL2 gene is responsible for autosomal dominant HL with high variability in terms of age of onset (5–32 years) and expressivity; HL is usually progressive. Patient 71. A three-year-old male with congenital severe bilateral SN HL and no relevant family history. Direct sequencing of GJB2 and GJB6 revealed a homozygous deletion in the GJB2 gene (c.35delG, p.(Gly12Valfs*2)). Patient 72. An eight-year-old male with prelingual severe bilateral SN HL and no relevant family history. Direct sequencing of GJB2 and GJB6 revealed a homozygous deletion in the GJB2 gene (c.35delG, p.(Gly12Valfs*2)). Family segregation was confirmed. Patient 73. A five-year-old female with moderate congenital SN HL. Her mother had moderate-to-high-frequency HL. She wore regular bilateral BTE hearing aids. Direct sequencing of the GJB2 and GJB6 loci revealed compound heterozygous mutations in the GJB2 gene (c.59T > C, p.Ile20Thr and c.109G > A, p.Val37Ile). Family segregation was confirmed. Patient 74. A six-year-old male with prelingual moderate bilateral SN HL and no relevant family history. Direct sequencing of the GJB2 and GJB6 loci revealed a homozygous mutation in the GJB2 gene (c.269T > C, p.(Leu90Pro)). Family segregation was confirmed. Patient 75. A 20-year-old female with congenital severe HL. She wore regular bilateral BTE hearing aids and was referred to the genetics department because of her wish to conceive a child. She had two sisters with severe bilateral HL who also wore hearing aids. She also had one sister and one brother without any hearing impairment. Direct sequencing of the GJB2 locus revealed a homozygous deletion in the GJB2 gene (c.35delG, p.(Gly12Valfs*2)). WES performed on a cohort of 61 children and 9 adult patients with HL identified a genetic etiology in 52.5% of children and 22.2% of adults.
Our diagnostic yield in the child cohort was within the upper range (10–80%) of results published in the literature. A high diagnostic yield from genomic testing has been associated with the inclusion of patients with early-onset HL and suspected genetic syndromes who have not undergone any previous genetic testing for common genes such as GJB2-GJB6. In our cohort, only three patients showed a GJB2 mutation revealed by exome sequencing. When including the four children excluded from our exome cohort, our diagnostic rate reached 55.4% in this population. The inclusion of greater numbers of deafness genes and the addition of CNV analysis can also increase the diagnostic yield. In our cases, we used large gene panels and a pipeline analysis optimized to identify CNVs. Seven cases were caused by CNVs detected by this algorithm, representing a substantial proportion of our cohort (21.9%). It is therefore important that the pipeline allows for the detection of such CNVs. The genetic causes of hearing loss have also been explored in different adult populations. Epidemiological studies focusing on the heritability of adult-onset HL estimated it at between 19% and 53% in twin studies, without differentiation between monogenic and polygenic rates. However, a recent study suggests that the diagnostic rate can be very high if patients are carefully selected. In our cohort, the diagnostic rate was 22.2%. As we only included 9 adults with HL, this diagnostic rate has to be re-evaluated in larger cohorts. A molecular diagnosis will help the medical team to adapt follow-up and provide appropriate genetic counseling. Diagnostic rates of up to 60–80% are expected in patients with suspected autosomal recessive non-syndromic congenital deafness. In our cohort of molecularly diagnosed patients, 43.75% were autosomal recessive non-syndromic cases (14 patients); three patients with autosomal recessive inheritance were syndromic. In total, 53% of patients had autosomal recessive inheritance patterns and 43.75% had autosomal dominant transmission, of which 11 were syndromic. One patient showed an X-linked inheritance pattern, which corresponds to expected rates. Both adults with a confirmed molecular diagnosis showed an autosomal dominant disorder. This is not surprising, as autosomal dominant HL, especially later-onset HL, is associated with genes that show incomplete penetrance and high variability. Our rate of syndromic HL in children (46.9%) is slightly higher than that reported in the literature for a Caucasian population (30%). Among the 15 patients with syndromic HL, nine had other signs and symptoms at the time of diagnosis. In two cases, these additional signs were discovered thanks to the investigations launched after molecular diagnosis. The seven patients without any additional sign at the time of diagnosis are being closely followed up according to molecular diagnosis recommendations. Molecular identification of these patients is extremely important, as follow-up and treatment can be adapted in order to prevent potential complications. Mutations in the GJB2 gene are among the most frequent causes of non-syndromic congenital HL, with a variable range depending on ethnic variations (8% to 42.32%). The second most frequent non-syndromic HL-associated genes are SLC26A4 and OTOF. In our cohort, we found STRC (five cases) to be the most frequent cause of non-syndromic congenital HL.
Other genes that were most often altered in our population were ACTG1 (n = 3), COL11A1 (n = 2) and GJB2 (n = 3). We did not identify any patient with SLC26A4 or OTOF variants. Interestingly, our data expand the phenotype of already known HL genes, and we report two patients with non-syndromic hearing loss and a pathogenic de novo variant in ACTG1, responsible for DFNA20/DFNA26 and type 2 Baraitser-Winter syndrome. Therefore, our results contribute to expanding the genotypic spectrum of ACTG1, which is associated with postlingual progressive SN HL. We reported two patients with a TMPRSS3 variant, which is a rare cause of non-syndromic HL in Caucasian patients (<1% vs. 6.3% in our cohort), but more frequent among Pakistani (1.8%), Tunisian (5%), Korean (5.9%), and especially Turkish patients (12%). Patient 27 is of Italian origin and patient 29 has French origins, thus suggesting that TMPRSS3 mutations might be more frequently involved in non-syndromic HL than reported in the literature, even in Caucasian patients. As shown previously, local epidemiology and diagnostic rates vary widely with ethnicity. Geneva is a city located at the crossroads of Europe and is known for its mixed ethnicity; our rates therefore probably do not reflect the epidemiology of a classical Western European Caucasian population. VUS were identified in five children (8%) and one adult (11.1%). We recommend regular re-analysis of noninformative exomes and exomes containing VUS every 18 to 24 months after the first analysis. As WES data are stored in our bioinformatics department, re-analysis of these exomes with newly described genes can easily be performed. Two cases potentially have two molecular diagnoses for HL. Of note, such findings can make genetic counseling more difficult and should be handled with care. Reported rates of dual diagnosis are around 1% to 4%. In our cohort, two patients had a dual diagnosis (3.2%), which is concordant with the data reported in the published literature. No secondary accidental findings were identified, as the bioinformatic analyses were centered on the 189-gene panel for HL and ear malformations. In conclusion, exome DNA sequencing with analysis of pathology-related gene panel(s) has become the gold standard for the investigation of HL. Our results emphasize the advantages of a global approach with careful variant and case discussion involving a multidisciplinary team to obtain a genetic diagnosis of SN HL. In addition to its undeniable value in clinical practice, with a 50–60% genetic diagnostic yield for SN HL (including GJB2/GJB6 alterations), it improves prognostic accuracy, as well as genetic and reproductive counseling. Importantly, this approach can also reveal clinically relevant undiagnosed syndromes, thus changing the outcome of the disorder and avoiding the occurrence of preventable complications.
Eligibility Criteria in Advanced Urothelial Cancer Clinical Trials: An Assessment of Modernization and Inclusion | 746840f6-9df2-4a22-b97c-937c37083185 | 11947756 | Neoplasms[mh] | Introduction Urothelial carcinoma (UC) is the sixth most frequently diagnosed neoplasm in the United States. Patients diagnosed with locally advanced unresectable or metastatic UC face significant morbidity and mortality, with five-year survival rates of less than 15%. To address this significant unmet need, several prominent clinical trials have led to improvements in the standard of care and, ultimately, regulatory approval of novel therapeutic agents for patients with advanced UC, including immune checkpoint inhibitors (ICIs) targeting the programmed cell death (PD-1) pathway, antibody-drug conjugates, and targeted agents such as fibroblast growth factor receptor (FGFR) inhibitors. Clinical trials are pivotal for the investigation of novel therapeutic strategies to establish their safety and efficacy in a broad patient population. It is estimated that only around 6% of adult patients with cancer of any type participate in therapeutic clinical trials. This potentially creates a barrier to the applicability of clinical trial results. For treatment advances to benefit patients in clinical practice, the enrollment of individuals representative of the real-world patient population is crucial. Despite recent efforts to streamline patient enrollment in clinical trials, broadening participation in trials continues to be challenging. Previous studies leveraging patient-reported survey data, as well as analyses of eligibility criteria provided in Investigational New Drug applications, suggest clinical trials continue to have restrictive eligibility criteria that strongly favor lower-risk patients. As experimental therapeutics have rapidly evolved, with increasing complexity of early-stage trial designs, eligibility criteria seem to have become more restrictive, with the number of requirements for phase 1 clinical trial enrollment increasing significantly. It is not surprising that these restrictive criteria can be carried through phase 2 and 3 trials, with likely deleterious effects on patient diversity. Following a comprehensive review and analysis of investigational new drug applications in 2017, the American Society of Clinical Oncology (ASCO), Friends of Cancer Research (FCR), and the United States Food and Drug Administration (FDA) examined specific eligibility criteria to determine whether to modify existing definitions in a way that would broaden eligibility for cancer clinical trials. The eligibility criteria examined in this joint statement included the inclusion of patients with treated or clinically stable brain metastases, pediatric patients older than 12 years, and HIV-infected patients with low risk of AIDS-related adverse outcomes, liberalizing creatinine clearance/glomerular filtration rate (CrCl/GFR) requirements, and permitting patients with prior or concurrent malignancies. However, a follow-up investigation in 2021 found that many of the very same eligibility criteria continue to have a significant presence in clinical trial exclusion criteria. In this study, utilizing publicly available information, we examined clinical trials conducted between 2012 and 2022 to evaluate the prevalence of overly restrictive eligibility criteria, as defined in the 2017 FCR-ASCO joint research statement, in interventional clinical trials enrolling patients with locally advanced and metastatic UC.
Methods We collated protocols indexed on ClinicalTrials.gov and evaluated their relevance to locally advanced and metastatic UC. We utilized the Medical Subject Heading (MeSH) terms “(metastatic OR advanced OR stage 4 OR unresectable) AND (bladder cancer OR upper tract urothelial carcinoma OR upper tract urothelial cancer)”. Other inclusion criteria required that studies be interventional therapeutic clinical trials enrolling patients aged ≥ 18 years, be phase 1–3, provide a study location, and have been initiated between June 30, 2012, and June 30, 2022. Studies from all locations were eligible for analysis. Of trials fulfilling these requirements, we excluded basket clinical trials enrolling patients with multiple tumor types, trials of low-grade or early-stage curable disease, and those that investigated local therapies such as surgery, ablative therapies such as radiation, or prognostic tools. Studies that did not provide sufficient information to determine eligibility criteria were excluded as well. In our assessment of qualifying clinical trials, we stratified eligibility criteria based on the previously mentioned 2017 FDA-led initiative. Specifically, we analyzed the prevalence of criteria with regard to brain metastases, concurrent malignancies, and HIV infection. Additionally, we included hepatitis B and C infection as part of our analysis of exclusion criteria, due to its potential as a source of excessive exclusion within many clinical trials. We then stratified these eligibility criteria based on the language used and categorized them into three distinct categories: total exclusion (TE), conditional inclusion (CI), and not reported (NR). Criteria stratified as TE utilized rigid language that would exclude a patient based on the criterion in question without an option for recourse. Criteria stratified as CI allowed patients to enroll if they met prespecified conditions within that criterion. Trials stratified as NR did not cite the eligibility criterion in question as necessary for participation in the trial. Additional trial protocol information, such as the specific focus of a bladder cancer trial or recruitment status, was also collected and tabulated (Table). The process of classification and stratification is visualized in the accompanying figure. 2.1 Statistical Analysis Descriptive statistics were used to summarize the presence of the previously mentioned exclusion criteria within each trial. Inclusion criteria collected consisted of required patient absolute neutrophil count (ANC), platelet count, hemoglobin (Hgb), AST/ALT ratio, CrCl/GFR, creatinine levels, bilirubin levels, and KPS/ECOG scores. Statistical associations between the various types of studies, therapies evaluated, inclusion criteria such as the required ECOG performance status, and the presence/language of exclusion criteria were evaluated via Fisher's exact tests utilizing R Statistical Software, version 4.2.1.
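To make this testing step concrete, the snippet below sketches the kind of contingency-table comparison described above in Python (the study itself used Fisher's exact tests in R 4.2.1). The counts shown are hypothetical placeholders rather than the study data, and collapsing to a 2 × 2 table (one therapy class versus all others, one exclusion-language category versus another) is a simplification of the full therapy-by-language comparison.

```python
# Minimal sketch of the association test described above: Fisher's exact
# test on a 2x2 table of therapy class vs. HIV exclusion-language category.
# NOTE: counts below are hypothetical placeholders, not the study data.
from scipy.stats import fisher_exact

# Rows: chemotherapy trials vs. all other trials
# Columns: HIV criterion coded as total exclusion (TE) vs. not reported (NR)
table = [
    [1, 3],   # chemotherapy trials:  [TE, NR]  (placeholders)
    [32, 1],  # all other trials:     [TE, NR]  (placeholders)
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.4f}")
```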
Results In our initial search, we identified 205 bladder cancer trials in total. Of these, 37 trials (18%) met our inclusion criteria and had adequate available data for assessment. The majority of trials (82%) were excluded either for lacking adequate information needed for analysis or because the trial in question concerned localized disease and/or treatment. Of the qualifying trials, 11 evaluated immunotherapy (29.7%), eight evaluated targeted therapy (21.6%), and five evaluated chemotherapy (13.5%). The remaining 13 trials evaluated combination therapies involving at least two of the previous categories (35.1%). Most eligible studies were phase 2 (62.2%), with phase 3 and phase 1 trials accounting for 24.3% and 5.4% of trials, respectively. A subset of multiphase trials was present as well, with two phase 2/3 trials (5.4%) and one phase 1/2 trial (2.7%). All eligible trials enrolled patients with either locally advanced or metastatic cancer of the bladder or upper urothelial tract, which includes the ureter and renal pelvis. Of the 37 trials, 10 studies had specific requirements regarding biomarker positivity, with three requiring specific genetic alterations to be present, including FGFR/HRAS mutations and HER-2 positivity (one each). Additionally, four trials specifically required eligible patients' disease to be either platinum-refractory or otherwise ineligible for cisplatin-based treatment. Inclusion criteria were present in every eligible trial. Only 35% of studies allowed a maximum Eastern Cooperative Oncology Group (ECOG) performance status score of 2. All other studies (65%) restricted their maximum scores to 0 or 1. One study utilized the Karnofsky Performance Score (KPS) system, requiring a score of ≥ 70%. Bilirubin levels were ubiquitously required to be ≤ 1.5 times the upper limit of normal (ULN). Requisite hemoglobin levels, platelet count, and ANC varied between clinical trials, with more than two-thirds of trials requiring ≥ 9 g/dL, ≥ 100,000/μL, and ≥ 1,500/μL, respectively. Minimum required creatinine clearance was ≥ 25 mL/min (8.1%; n = 3), ≥ 30 mL/min (51.4%; n = 19), or ≥ 35 mL/min (29.7%; n = 11). The remaining trials (8.1%; n = 3) did not report any requirement for CrCl/GFR. While a requisite serum creatinine level was provided by most trials, a substantial number (43.2%) did not report a requirement. Sixteen trials (43.3%) utilized ≤ 1.5 × ULN as an upper limit, while four (10.8%) used less restrictive limits (2.0 × ULN) and two (5.4%) utilized more restrictive limits (1.25 × ULN). Many of the clinical trials had selectively restrictive criteria for patients with bladder cancer of variant histology. While 16 trials (43.2%) did not specify criteria for disease histology, three trials (8.1%) restricted accrual strictly to patients with purely urothelial carcinoma, and 16 trials (43.2%) allowed for inclusion of patients with mixed variant disease so long as UC comprised over 50% of their disease histology. One trial (2.7%) allowed for mixed variants in the overall cohort under the stipulation that patients with pure adenocarcinoma or epidermoid carcinoma, as well as mixed or pure small-cell neuroendocrine carcinoma, be excluded. Two trials (5.4%) allowed for mixed and pure variants of bladder cancer, such as squamous cell, and one (2.7%) allowed pure and mixed variants so long as patients with UC histology comprised most of the overall trial patient population.
HIV infection was found to be the most prevalent exclusion criterion, with all clinical trials coded as either TE (89.2%) or NR (10.8%). Hepatitis B/C infection was the second most common exclusion criterion, with 21 trials (58.6%) endorsing total exclusionary language. The presence of brain metastases was TE in 13 trials (35.1%); however, it was deemed CI in 17 trials (45.9%). Criteria for concurrent malignancies were the most explicitly inclusive of any exclusion criteria, with only one trial (2.7%) not reporting any such measures and only two noted as TE (Table; Figure). Thirty-four trials (91.9%) provided CI exclusion criteria for patients with concurrent malignancies. No statistically significant associations were found between the frequencies of therapy types involved in the clinical trial and the presence of exclusion criteria pertaining to brain metastases, concurrent malignancies, or hepatitis B/C infection (p = 0.4628, 0.3173, and 0.1064, respectively). However, a significant association was noted between HIV infection and the type of therapy involved in the trial (p = 0.003) (Table). Of the clinical trials with no reported requirements regarding patients with HIV, chemotherapy trials were more likely to have NR language (n = 3; 75%), whereas trials with TE language were more likely to involve combination therapy or immunotherapy interventions (39.4% and 33.3%, respectively). No significant associations were found between the type of exclusion criteria and the clinical phase of the trial or the ECOG performance status required for inclusion.
Discussion Our study is one of the first to investigate the prevalence of restrictive eligibility criteria in contemporary clinical trials focusing on advanced UC. It demonstrates that exclusion criteria considered excessively restrictive in the FCR-ASCO joint statement continue to be prevalent in interventional clinical trials for locally advanced and metastatic UC. Exclusion criteria prevalence and rigidity were unequally distributed, with concurrent malignancies clearly having the most CI exclusion criteria, and hepatitis and HIV infection having the most TE exclusion criteria. Additionally, the type of therapy correlated significantly with the exclusionary language. Our study also demonstrated continued under-representation of patients with variant histology in contemporary advanced UC trials. Although UC was historically considered an orphan disease with limited treatment options, the past several years have seen a significant expansion of research efforts, with multiple clinical trials investigating novel therapeutic strategies. Effective treatment of this disease presents several challenges, considering the higher median age at diagnosis, concurrent co-morbidities pertaining to renal and cardiovascular disease, and significant attrition from one line of therapy to the next. Broadening eligibility criteria for UC clinical trials is crucial to ensure the enrollment of a representative patient population and to enhance applicability in routine clinical practice. HIV infection was heavily exclusionary in most trials evaluated. However, there was a significantly higher prevalence of totally exclusive language in combination therapy and immunotherapy trials. The exclusion of patients with HIV from immunotherapy and combination therapy trials is likely driven by concern for existing immunosuppression caused by both the viral infection and its treatment, as well as the potential for drug–drug interactions with conventional HIV therapy. There is an inherent reliance on immune function to achieve an anti-tumor immune response, which has been shown to vary widely among patients with HIV. However, outcomes in patients with HIV have shifted in the past decade, with effective treatment and durable control of viral load becoming increasingly commonplace. Recent studies have shown response rates comparable to the general population with immunotherapy agents such as nivolumab, provided patients had already undergone or were concurrently undergoing antiretroviral therapy (ART). Furthermore, a recent study leveraging data from a multi-institution international consortium demonstrated comparable efficacy of ICIs in patients with HIV and no increase in immune-related adverse events, irrespective of CD4 counts. The current systematic restriction of patients with HIV should be carefully re-examined, and patients who are HIV positive should be considered a population of special interest and prioritized in future clinical trials. While hepatitis B and hepatitis C infection were among the more prevalent exclusion criteria, we did not observe any statistically significant skew toward a particular class of therapy. Similar to HIV infection, due to the advent and success of modern antiviral therapy, patients infected with hepatitis C are now experiencing near-normal life expectancies with proper management. Additionally, the prevalence of hepatitis B vaccination has driven hepatitis B infection to an all-time low globally.
Such diseases still require adequate management of both immune function and general metabolism involving the liver, hence the large number of CI clinical trials. Another concern for the enrollment of patients with hepatitis B or hepatitis C infection is the assumed potential to induce autoimmune hepatitis, which occurs in 5%–10% of patients receiving ICI immunotherapy. However, evidence indicates that patients infected with hepatitis B or hepatitis C do not have significantly increased ICI-induced liver injury compared to the general population, and viral reactivation was only rarely observed in patients with hepatitis B undergoing ICI treatment. These results directly support including these patients in therapeutics research. Although relatively uncommon, brain and central nervous system (CNS) metastases originating from distant primary lesions are of serious concern in the treatment and management of patients with bladder cancer, occurring at a rate of approximately 0%–7%. Additionally, they have historically been associated with numerous complications and a dismal prognosis, with a median survival of only a few months. Unique obstacles in treating the brain include the impact of treatment on patient cognition and the presence of tumor infiltration at the blood–brain barrier, which reduces the effectiveness of agents such as chemotherapeutics and small-molecule/targeted therapies. Due to the relatively novel nature of these therapies, most clinical trials involving bladder cancer over the past decade have likely excluded brain metastases completely, owing to the vastly decreased overall survival and quality of life associated with this disease state. However, with the utilization of modern stereotactic radiation, durable intracranial control of disease can be achieved in a subset of patients. Many studies examined provided stipulations for the inclusion of these patients; however, our findings indicate a significant presence of TE and NR clinical trials. Therefore, the inclusion/exclusion of patients with brain metastases should be further nuanced and individualized, with more explicit enrollment pathways for patients with UC brain and CNS metastases. Concurrent malignancies were found to be the most conditionally inclusive of the exclusion criteria in the FCR-ASCO statement within the past decade. Owing to the prevalence of concurrent malignancies in cancer patients broadly (15%), it has been demonstrated to be more beneficial and representative to include patients with such conditions, provided there is no significant interference originating from these conditions. It has been shown that older patients experience a higher number of concurrent malignancies. Thus, it can be surmised that older patients face greater exclusion than other patient populations, in part on the basis of a higher prevalence of concurrent malignancies. This presents complications particular to clinical research in bladder cancer treatment, as the median age at bladder cancer diagnosis is 73 years, considerably older than the median age of diagnosis for many other neoplasms. The overall state of interventional clinical trials within the past decade seems to indicate a shift in disposition in favor of including patients with multiple malignancies. Ideally, the criteria for the inclusion of patients with concurrent malignancies should continue to be explicitly stated in an effort to include older patients who might otherwise be discouraged from enrolling.
A primary concern in our investigation into bladder cancer clinical trials is the general paucity of trials permitting or specifically focusing on variant histology. While pure or mixed UC represents the most common histologic subtype, a small subset of patients with bladder or upper tract cancer have non-urothelial histology. Though these cancers are considered to have an aggressive disease course, such patients are frequently excluded from clinical trials and remain a significant unmet need, with limited prospective guidance on optimal treatment. Most trials analyzed either provided no guidelines or outright barred patients without pure UC histology from enrolling (43.2% and 8.1%, respectively), while just 10.8% of trials expressly allowed both mixed and pure variant histologies of bladder cancer. Although challenging, concerted efforts to develop clinical trials specific to rare or uncommon subtypes have been feasible and successful in other genitourinary tumor types, such as papillary renal cell carcinoma. A similar approach is needed in bladder cancer to individualize therapy based on the unique biology of these variants. Our study has several limitations inherent to an analysis of this nature. Firstly, our analysis was restricted to trial protocols that possessed publicly available information and fell within our determined timeframe. We selected our timeframe based on therapy relevance and exclusion criteria, finding that analyzing clinical trials older than 10 years from our search date yielded results not relevant to the current field of investigative interventional therapies for bladder cancer. Our screening methods for eligible trials and the relatively few interventional clinical studies on systemic therapies being conducted for advanced bladder cancer limited our sample size. Lastly, we could only assess the common exclusion criteria for each of the 37 trials available based on published information. Analysis of eligibility criteria in recent landmark trials investigating the treatment of advanced UC, such as EV-302, CheckMate-901, BLC2001, or THOR, was limited, as publicly available information regarding these trials was not available to us within the timeframe of this analysis. The table provides an assessment of eligibility criteria within the four aforementioned trials using our analytical framework. As it demonstrates, many of the eligibility criteria provided by these studies have shortcomings similar to those within our analysis cohort, with the notable exception of HIV infection. Three of the four trials specified criteria under which a patient diagnosed with HIV could be enrolled. This stands in stark contrast to our initial cohort, in which 89% of trials provided no conditions for the inclusion of these patients. Among these four trials, three allowed for minor components (< 50%) of variant disease histology, while one did not report specific histological criteria. Additional criteria that could potentially present a barrier to trial enrollment could not be examined.
Conclusion Our study demonstrated the continued persistence of overly exclusionary clinical trial criteria, as defined by the FCR–ASCO–FDA investigation, in therapeutic trials focusing on UC. HIV and hepatitis B/C infection were exclusionary in the majority of trials, with a significant association observed between exclusionary language and therapeutic class. Few trials specified inclusion of bladder cancer of non-urothelial histology, while many explicitly excluded patients with variant histology. Future efforts should focus on making clinical trial eligibility criteria more inclusive to expand the benefit of novel therapeutics to a broader patient population.
Benjamin D. Mercier: conceptualization (equal), data curation (lead), formal analysis (supporting), investigation (equal), methodology (supporting), validation (equal), visualization (lead), writing – original draft (lead), writing – review and editing (lead). Ameish Govindarajan: conceptualization (equal), data curation (supporting), investigation (supporting), methodology (supporting), validation (lead), visualization (supporting), writing – original draft (supporting), writing – review and editing (supporting). Daniela V. Castro: conceptualization (equal), data curation (supporting), investigation (equal), methodology (equal), validation (supporting). Xiaochen Li: data curation (supporting), formal analysis (lead), methodology (supporting), validation (equal). Errol J. Philip: validation (equal), writing – review and editing (equal). Matthew I. Feng: data curation (supporting), investigation (supporting). Sweta R. Prajapati: data curation (supporting), investigation (supporting). Elyse H. Chan: data curation (supporting), investigation (supporting). Kyle O. Lee: data curation (supporting), investigation (supporting). Ishaan Sehgal: data curation (supporting), investigation (supporting). Jalen Patel: data curation (supporting), investigation (supporting). Anna O'Dell: data curation (supporting), investigation (supporting). Alexander Chehrazi‐Raffle: validation (supporting), writing – review and editing (supporting). Hedyeh Ebrahimi: validation (supporting), writing – review and editing (supporting). Adam Rock: validation (supporting), writing – review and editing (supporting). Zeynep Busra Zengin: validation (supporting), visualization (supporting). Luis A. Meza: validation (supporting). Nazli Dizman: validation (supporting). JoAnn Hsu: validation (supporting). Sandy Liu: validation (supporting). Tanya B. Dorff: validation (supporting). Sumanta K. Pal: conceptualization (supporting), supervision (supporting), validation (supporting), writing – review and editing (supporting). Abhishek Tripathi: conceptualization (equal), methodology (equal), project administration (lead), supervision (lead), validation (supporting), writing – review and editing (supporting).
The authors declare no conflicts of interest.
Data S1. Data S2.
A retrospective characterization of pediatric facemasks marketed in the United States and implications for future designs | de84bdf9-77ef-42eb-87d4-7ceee3d571c1 | 11412539 | Pediatrics[mh] | The Centers for Disease Control and Prevention (CDC) recommends that children aged 2 years and older wear masks to protect themselves and others from COVID-19, flu, and other illnesses. The technological requirements for pediatric masks differ significantly from those of N95 respirators, which are intended for use by healthy adults after a medical screening by a licensed medical professional (per 29 Code of Federal Regulations 1910.134(e)(2)). Due to children's unique anatomical and physiological characteristics, specialized masks tailored to their age group are necessary. Smaller-sized masks intended for adults are available, but they were not designed to consider the risk of asphyxiation in children and hence are not recommended for pediatric use. Pediatric facemasks, given their intended respiratory protection use, are expected to ensure a proper fit and have low resistance to the passage of air, allowing children to breathe comfortably while remaining well protected. Despite extensive research on N95 respirators, there has been limited focus on pediatric facemasks; currently, there are no N95 respirators available for children. However, considering the widespread usage of masks during the COVID-19 pandemic, conducting additional research is crucial, especially regarding the fit and breathability of pediatric facemasks. Although several pediatric facemasks have been US FDA-cleared to date to be legally marketed within the US (product code OXZ), there is no consensus on the maximum flow rate for testing the filtration efficiency of pediatric facemasks, unlike for N95 respirators, for which 85 LPM is a well-accepted maximum flow rate for filtration efficiency testing. In lieu of any consensus, device manufacturers use the surgical facemask standard, wherein the specified face velocity varies widely, ranging from 0.5 to 25 cm/sec. In addition, this standard is not specifically tailored to account for the unique physiological conditions of children. Thus, one objective of this study is to determine a suitable flow rate for testing masks intended for the pediatric population. Furthermore, the absence of established guidelines for an acceptable pressure drop in pediatric facemasks adds another layer of complexity. For N95 respirators, the pressure drop limit is well established at 35 mmH₂O for inhalation at 85 liters/minute (LPM). However, studies have found that high breathing resistance (> 9 mmH₂O) often causes discomfort in users. Hence, modern-day respirators are typically designed to have a breathing resistance of 6–9 mmH₂O. Unfortunately, there are no existing resources or references that provide information on the maximum permissible breathing resistance for pediatric facemasks. Thus, the second objective of this study was to draw insights from data related to adult N95 respirators to define an acceptable pressure drop for pediatric facemasks. In evaluating the fit of pediatric facemasks, it is imperative to consider anthropometric requirements specific to the pediatric population, as that is an FDA requirement for obtaining marketing clearance of pediatric facemasks in the US.
However, to date, no methodologies have been developed to assess the fit of pediatric facemasks, and as a result, manufacturers often have to develop their own methods, which can be inefficient and cost-ineffective. Thus, a third objective of our study was to develop a method for assessing the fit and breathing resistance of future pediatric facemask designs. Pediatric facemask models At the time of execution of this study, there were nine brands of pediatric facemasks with 510(k) clearance. However, out of these nine, only four were available in the US market for purchase, and those were selected for investigation in this study. In the text, we have presented the results in the order of their clearance, providing a chronological perspective on the development and performance of these pediatric facemasks. All of these masks are indicated for one-time use in children in the age group of 4 to 12 years and are recommended for use in a health care setting with appropriate adult supervision. Worst-case flow rate for testing While conducting tests on pediatric facemasks, determining an appropriate flow rate is crucial. The study commenced with a literature review centering on the identification of suitable breathing flow rates for the pediatric population (S1 Text). Maximum permissible pressure drop Our search with terms like 'pediatric + breathing resistance' revealed a considerable variation in reported inspiratory resistance within the age group of 2 to 14 years, spanning from a minimum of 0.37 to a maximum of 2.1 mmH₂O/LPM. We were not able to locate any direct references that define an acceptable pressure drop for pediatric facemasks. Although a standard for general barrier face coverings does mention a maximum pressure drop of 5 mmH₂O for higher-performance barrier face coverings, it does not specify any acceptable values for children. Hence, we relied on extrapolating the adult response to N95 respirators to arrive at the maximum permissible pressure drop in pediatric facemasks.
We assume that if the ratio of the resistance of pediatric facemasks by the total pressure drop in a pediatric lung maintained the same ratio as that of adults’ respirators compared to total pressure drop in adults, then we can deduce the maximum permissible pressure drop in pediatric masks as follows– M a x i m u m p e r m i s s i b l e p r e s s u r e d r o p i n p e d i a t r i c f a c e m a s k s a t h i g h e s t f l o w r a t e f o r c h i l d r e n T o t a l p r e s s u r e d r o p d u e t o p e d i a t r i c l u n g s r e s i s t a n c e = M a x i m u m p e r m i s s i b l e p r e s s u r e d r o p i n a d u l t r e s p i r a t o r s a t h i g h e s t f l o w r a t e f o r a d u l t s T o t a l p r e s s u r e d r o p d u e t o a d u l t l u n g s r e s i s t a n c e (1) Reorganizing above we get, M a x i m u m p e r m i s s i b l e p r e s s u r e d r o p i n p e d i a t r i c f a c e m a s k s a t h i g h e s t f l o w r a t e f o r c h i l d r e n = M a x i m u m p e r m i s s i b l e p r e s s u r e d r o p i n a d u l t r e s p i r a t o r s a t h i g h e s t f l o w r a t e f o r a d u l t s T o t a l p r e s s u r e d r o p d u e t o a d u l t l u n g s r e s i s t a n c e × T o t a l p r e s s u r e d r o p d u e t o p e d i a t r i c l u n g s r e s i s t a n c e (2) The right-hand side (RHS) of can be determined using literature (S5 Table in ) which then helps determine the left-hand side of the equation i.e. the maximum permissible pressure drop in pediatric masks (S2 Text in ). Measuring filtration efficiency and pressure drop We utilized our previous method for assessing filtration efficiency under ideal conditions that are without any leaks. However, for pressure drop measurements, we explored three distinct methods: 1) using whole masks with pleats , 2) using whole masks with pleats opened , and 3) measuring pressure drop using facemask coupons. The first two methods were conducted utilizing the experimental setup illustrated in . To assess pressure drop in the coupons, we considered a coupon size of 4.9 cm 2 and utilized a method outlined in the literature, British National Standards (BSEN) 14683:2019 , derived from MIL-M-36954C, and subsequently adapted here . These coupons were directly cut from the whole mask, with all the inner layers retained (as in a whole mask), unpleated and then recut so the cross-sectional area would be 4.9 cm 2 . Method for measuring fit and breathing resistance Our fit measurement method was adapted from our prior research on adult manikins . Fit testing and breathing resistance measurements were performed on additively manufactured headforms that featured a 5 mm imitation skin layer . The selected headforms (S4 Text in ) represented pediatric individuals aged 2 to 14 years, namely 2-year-old Betty, 5-year-old Roberta, 8-year-old Dizzy, 11-year-old Billie, and 14-year-old Louis as shown in S6-S8 Tables in . To assess the alignment of these manikins with the general population, we measured their facial dimensions—interpupillary distance, bizygomatic breadth, lower face height, and ear-sellion depth—on manikins we 3D printed in nylon and assembled with a skin-like silicone layer (methodology in S8 Text in ). We plotted these measurements against the U.S. average data for respective age groups and found them to fall within the average range for the pediatric population with the maximum deviation from the average ranging in between -3.5/+2.1% across the 5–14-year-old headforms (S6 Table in ). 
Measuring filtration efficiency and pressure drop We utilized our previously published method for assessing filtration efficiency under ideal conditions, that is, without any leaks. However, for pressure drop measurements, we explored three distinct methods: 1) using whole masks with pleats unopened, 2) using whole masks with pleats opened, and 3) measuring pressure drop using facemask coupons. The first two methods were conducted utilizing the experimental setup illustrated in the figure. To assess pressure drop in the coupons, we considered a coupon size of 4.9 cm² and utilized a method outlined in the literature, the British Standard BS EN 14683:2019, derived from MIL-M-36954C and subsequently adapted here. These coupons were cut directly from the whole mask, with all the inner layers retained (as in a whole mask), unpleated, and then recut so that the cross-sectional area would be 4.9 cm². Method for measuring fit and breathing resistance Our fit measurement method was adapted from our prior research on adult manikins. Fit testing and breathing resistance measurements were performed on additively manufactured headforms that featured a 5 mm imitation skin layer. The selected headforms (S4 Text) represented pediatric individuals aged 2 to 14 years, namely 2-year-old Betty, 5-year-old Roberta, 8-year-old Dizzy, 11-year-old Billie, and 14-year-old Louis (S6–S8 Tables). To assess the alignment of these manikins with the general population, we measured their facial dimensions—interpupillary distance, bizygomatic breadth, lower face height, and ear-sellion depth—on manikins we 3D printed in nylon and assembled with a skin-like silicone layer (methodology in S8 Text). We plotted these measurements against the U.S. average data for the respective age groups and found them to fall within the average range for the pediatric population, with the maximum deviation from the average ranging between -3.5% and +2.1% across the 5–14-year-old headforms (S6 Table). To create a headform representing a 2-year-old child (named Betty), we reduced the size of the Dizzy headform to 88.5% (S6 Table), as we were not able to locate an equivalent headform for that age in any publicly available reference. Dizzy was used for the scaling down (S8 Table) as it appeared to be a better representative of the average of various facial measurements than Roberta (S7 Table), based on the facial measurements of the U.S. average. Because pediatric facemasks have ear loops/straps (which go behind the ears), we incorporated a thinner, 1 mm thick skin-mimicking silicone region where the mask loops around the ears. A step-by-step description of the fit measurement protocol is provided in the supporting information (S9 Text). A TSI Model 8048 PortaCount Respirator Fit Tester, which is used for quantitative fit testing in adults, was used with the pediatric headforms. The fit factor was measured for each flow rate individually by measuring the concentration of sodium chloride aerosol (generated using a TSI Model 3026) before ($C_{in}$) or after ($C_{out}$) the pediatric facemasks:

$$\text{Fit factor } (FF) = \frac{C_{out}}{C_{in}} \tag{3}$$

However, since children may engage in a variety of activities (moderate and heavy) while donning masks, an overall fit factor was determined across the three flow rates of 5, 30, and 45 LPM using the following equation, where $FF_1$, $FF_2$, and $FF_3$ are the fit factors at 5, 30, and 45 LPM, respectively:

$$\text{Overall } FF = \frac{3}{\dfrac{1}{FF_1}+\dfrac{1}{FF_2}+\dfrac{1}{FF_3}} \tag{4}$$

Airflow sampling was controlled using a mass flow controller (Alicat, Model # MCR-100SLPM-D) to maintain a constant suction flow. To simulate human breathing and compare fit-testing results between constant suction flow and oscillatory flow, we used a QuickLung® Breather breathing simulator (IngMar Medical); example flow profiles are provided in S1 Fig in S3 Text.
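As a worked illustration of Eqs (3) and (4), the sketch below computes per-flow fit factors and the overall harmonic-mean fit factor. The concentration values are invented placeholders, and, following the usual quantitative fit-testing convention, we assume $C_{out}$ is the challenge concentration outside the mask and $C_{in}$ the concentration sampled behind it.

```python
# Minimal sketch of Eqs (3)-(4): per-flow fit factors and the overall
# harmonic-mean fit factor across the three test flow rates.

def fit_factor(c_out: float, c_in: float) -> float:
    """Eq (3): ratio of NaCl aerosol concentrations across the mask."""
    return c_out / c_in

def overall_fit_factor(ff1: float, ff2: float, ff3: float) -> float:
    """Eq (4): harmonic mean of the fit factors at 5, 30 and 45 LPM."""
    return 3.0 / (1.0 / ff1 + 1.0 / ff2 + 1.0 / ff3)

# Illustrative placeholder concentrations (e.g., particles/cm^3), not study data
ff_5  = fit_factor(c_out=2000.0, c_in=40.0)    # 5 LPM  -> FF = 50
ff_30 = fit_factor(c_out=2000.0, c_in=100.0)   # 30 LPM -> FF = 20
ff_45 = fit_factor(c_out=2000.0, c_in=200.0)   # 45 LPM -> FF = 10

# The harmonic mean is dominated by the worst (lowest) per-flow fit factor
print(f"overall fit factor = {overall_fit_factor(ff_5, ff_30, ff_45):.1f}")  # ~17.6
```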
In adults, a higher tested pressure drop in respirators is reflected in increased breathing resistance, leading to discomfort and difficulty in breathing. To alleviate these symptoms, it is recommended to select respirators with a pressure drop below a certain threshold, typically in the range of 6–9 mmH 2 O . We assume that if the ratio of the resistance of pediatric facemasks by the total pressure drop in a pediatric lung maintained the same ratio as that of adults’ respirators compared to total pressure drop in adults, then we can deduce the maximum permissible pressure drop in pediatric masks as follows– M a x i m u m p e r m i s s i b l e p r e s s u r e d r o p i n p e d i a t r i c f a c e m a s k s a t h i g h e s t f l o w r a t e f o r c h i l d r e n T o t a l p r e s s u r e d r o p d u e t o p e d i a t r i c l u n g s r e s i s t a n c e = M a x i m u m p e r m i s s i b l e p r e s s u r e d r o p i n a d u l t r e s p i r a t o r s a t h i g h e s t f l o w r a t e f o r a d u l t s T o t a l p r e s s u r e d r o p d u e t o a d u l t l u n g s r e s i s t a n c e (1) Reorganizing above we get, M a x i m u m p e r m i s s i b l e p r e s s u r e d r o p i n p e d i a t r i c f a c e m a s k s a t h i g h e s t f l o w r a t e f o r c h i l d r e n = M a x i m u m p e r m i s s i b l e p r e s s u r e d r o p i n a d u l t r e s p i r a t o r s a t h i g h e s t f l o w r a t e f o r a d u l t s T o t a l p r e s s u r e d r o p d u e t o a d u l t l u n g s r e s i s t a n c e × T o t a l p r e s s u r e d r o p d u e t o p e d i a t r i c l u n g s r e s i s t a n c e (2) The right-hand side (RHS) of can be determined using literature (S5 Table in ) which then helps determine the left-hand side of the equation i.e. the maximum permissible pressure drop in pediatric masks (S2 Text in ). We utilized our previous method for assessing filtration efficiency under ideal conditions that are without any leaks. However, for pressure drop measurements, we explored three distinct methods: 1) using whole masks with pleats , 2) using whole masks with pleats opened , and 3) measuring pressure drop using facemask coupons. The first two methods were conducted utilizing the experimental setup illustrated in . To assess pressure drop in the coupons, we considered a coupon size of 4.9 cm 2 and utilized a method outlined in the literature, British National Standards (BSEN) 14683:2019 , derived from MIL-M-36954C, and subsequently adapted here . These coupons were directly cut from the whole mask, with all the inner layers retained (as in a whole mask), unpleated and then recut so the cross-sectional area would be 4.9 cm 2 . Our fit measurement method was adapted from our prior research on adult manikins . Fit testing and breathing resistance measurements were performed on additively manufactured headforms that featured a 5 mm imitation skin layer . The selected headforms (S4 Text in ) represented pediatric individuals aged 2 to 14 years, namely 2-year-old Betty, 5-year-old Roberta, 8-year-old Dizzy, 11-year-old Billie, and 14-year-old Louis as shown in S6-S8 Tables in . To assess the alignment of these manikins with the general population, we measured their facial dimensions—interpupillary distance, bizygomatic breadth, lower face height, and ear-sellion depth—on manikins we 3D printed in nylon and assembled with a skin-like silicone layer (methodology in S8 Text in ). We plotted these measurements against the U.S. 
We utilized our previous method for assessing filtration efficiency under ideal conditions, that is, without any leaks. However, for pressure drop measurements, we explored three distinct methods: 1) using whole masks with pleats , 2) using whole masks with pleats opened , and 3) measuring pressure drop using facemask coupons. The first two methods were conducted utilizing the experimental setup illustrated in . To assess pressure drop in the coupons, we considered a coupon size of 4.9 cm² and utilized a method outlined in the literature, British National Standards (BSEN) 14683:2019 , derived from MIL-M-36954C, and subsequently adapted here . These coupons were directly cut from the whole mask, with all the inner layers retained (as in a whole mask), unpleated, and then recut so the cross-sectional area would be 4.9 cm².

Our fit measurement method was adapted from our prior research on adult manikins . Fit testing and breathing resistance measurements were performed on additively manufactured headforms that featured a 5 mm imitation skin layer . The selected headforms (S4 Text in ) represented pediatric individuals aged 2 to 14 years, namely 2-year-old Betty, 5-year-old Roberta, 8-year-old Dizzy, 11-year-old Billie, and 14-year-old Louis, as shown in S6-S8 Tables in . To assess the alignment of these manikins with the general population, we measured their facial dimensions—interpupillary distance, bizygomatic breadth, lower face height, and ear-sellion depth—on manikins we 3D printed in nylon and assembled with a skin-like silicone layer (methodology in S8 Text in ). We plotted these measurements against the U.S. average data for the respective age groups and found them to fall within the average range for the pediatric population, with the maximum deviation from the average ranging between −3.5% and +2.1% across the 5–14-year-old headforms (S6 Table in ). To create a headform representing a 2-year-old child (named Betty), we reduced the size of the Dizzy headform to 88.5% (S6 Table in ), as we were not able to locate an equivalent headform for that age in any publicly available reference. Dizzy was used for the scaling down (S8 Table in ) as it appeared to be a better representative of the average of various facial measurements than Roberta (S7 Table in ), based on the U.S. average facial measurements. Because pediatric facemasks have ear loops/straps (which go behind the ears), we incorporated a thinner, 1 mm thick skin-mimicking silicone region where the mask loops around the ears . A step-by-step description of the fit measurement protocol is provided in supporting information (S9 Text in ). A TSI Model 8048 Portacount Respirator Fit Tester, which is used in quantitative fit testing in adults, was used with the pediatric headforms. Fit factor was measured for each flow rate individually by measuring the concentration of sodium chloride (generated using a TSI Model 3026) outside (C_out) and inside (C_in) the pediatric facemasks.

Fit factor (FF) = C_out / C_in (3)

However, since children may engage in a variety of activities (moderate and heavy) while donning masks, an overall fit factor was determined across the three flow rates 5, 30 and 45 LPM using the following equation, where FF1, FF2 and FF3 are the fit factors at 5, 30 and 45 LPM, respectively.

Overall FF = 3 / (1/FF1 + 1/FF2 + 1/FF3) (4)

Airflow sampling was controlled using a mass flow controller (Alicat, Model # MCR-100SLPM-D) to maintain constant suction flow. To simulate human breathing and compare fit-testing results between constant suction flow and oscillatory flow, we used a QuickLung® Breather breathing simulator (Ingmar Medical) (example flow profiles are provided in S1 Fig in S3 Text provided in ).
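As a minimal illustration of Eqs (3) and (4), the snippet below (names ours, values hypothetical) computes the overall fit factor as the harmonic mean of the per-flow-rate fit factors, which is dominated by the worst (leakiest) condition.

```python
def fit_factor(c_out, c_in):
    """Eq (3): challenge-aerosol concentration outside the mask divided
    by the concentration measured inside it."""
    return c_out / c_in

def overall_fit_factor(ff_values):
    """Eq (4): harmonic mean of the per-flow-rate fit factors
    (FF1-FF3 at 5, 30 and 45 LPM in this study)."""
    return len(ff_values) / sum(1.0 / ff for ff in ff_values)

# Hypothetical fit factors of 10, 6 and 4 at 5, 30 and 45 LPM:
print(overall_fit_factor([10.0, 6.0, 4.0]))  # ~5.8, pulled toward the worst FF
```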
Worst-case flow rate and pressure drop

Literature review (S1 Text in ) revealed that a flow rate of 45–60 LPM encapsulates the upper limit experienced during vigorous physical activities in individuals under 18 years old. Since pediatric facemasks are indicated for a wide age range of 4–12 years, where much lower flow rates are also likely, 45 LPM was chosen as a realistic worst-case scenario for evaluating pediatric facemasks . The inspiratory resistance for adult respirators above which there can be discomfort from donning is 9 mmH₂O at 85 LPM . Independently, based on S5 Table in , the lowest inspiratory resistance reported in children is 0.37 mmH₂O/LPM , and children's breathing flow rate during vigorous activities can reach 45 LPM. Using Eq (2) and assuming that the total pressure drop in adult lungs is ~750 Pa (= 76.5 mmH₂O) at 105 LPM , and that it remains relatively unchanged at 85 LPM, the maximum permissible pressure drop in a pediatric face mask = 0.37 mmH₂O/LPM × 45 LPM × (9.0/76.5) = 2.04 mmH₂O, or ~2 mmH₂O.

Filtration efficiency in FDA cleared pediatric facemasks

 displays the filtration efficiency of different pediatric facemask brands. On average, findings across the four brands indicated consistently high filtration efficiency at low flow rates (96% at 5 LPM), decreasing as flow rates increased (83% at 45 LPM). Brand A demonstrates the steepest (27%) reduction in filtration efficiency as flow rate increased from 5 to 45 LPM, while the other brands experienced a lesser (~10%) reduction. The variation in filtration efficiency levels obtained for the various brands of pediatric facemasks tested in our study aligns with filtration efficiency results reported previously . The brands of the pediatric facemasks in are categorized in chronological order of 510(k) clearance received from the US FDA, revealing that later generations of pediatric facemasks exhibit higher filtration efficiency than their predecessors, indicating an improvement in subsequent facemask filtration performance.

Pressure drop in FDA cleared pediatric facemasks

Note that the pressure drop observed for whole masks and coupons is not the pressure that a user will experience, as the pressure drop experienced by a user (also referred to as breathability or breathing resistance) is also a function of how well the mask is donned to the user's face. The pressure drop measured on whole masks nevertheless provides an important insight, as it enables a comparison across brands in a situation where there is no leakage. illustrates pressure drop values for four pediatric facemask brands at flow rates of 5, 30, and 45 LPM, utilizing the whole masks with pleats opened and the experimental setup depicted in . Keeping the surface area of the masks the same (approximately 85 cm²), and consistent with previous studies , increasing the flow rate from 5 LPM to 45 LPM shows an increase in the pressure drop for all brands of pediatric facemasks. However, there is large brand-to-brand variability. Brand C, for instance, experiences a three-fold higher pressure drop than brand A at 30 LPM and a similar 3.5-fold difference at the higher flow rate of 45 LPM. Liu et al. studied the relationship between filtration efficiency and pressure drop . Consistent with that study, facemask brand C exhibited the highest filtration efficiency among the four types of masks, accompanied by the highest pressure drop . When using the BSEN 14683:2019 method, the pressure drop is measured on a much smaller unpleated coupon area (4.9 cm² compared to ~85 cm²). Although the flow rate is significantly lower, the face velocity for BSEN 14683:2019 (= 1.8 LPM / 4.9 cm² ≈ 6 cm/s) is comparable to the face velocity of facemasks with the pleats opened at the highest flow rate, leading to similar pressure drops (since pressure drop scales linearly with velocity). Given the similarities between results reported with whole masks at 45 LPM and with coupons using BSEN 14683:2019, it may be simpler to use the BSEN 14683:2019 method at 1.8 LPM instead of measuring the pressure drop of whole masks at multiple flow rates.
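The face-velocity argument above is easy to verify numerically; the short sketch below (names ours) converts volumetric flow to mean face velocity for the coupon and whole-mask geometries given in the text, showing the two are of the same order of magnitude.

```python
def face_velocity_cm_per_s(flow_LPM, area_cm2):
    """Mean face velocity: volumetric flow divided by exposed filter area."""
    return flow_LPM * 1000.0 / 60.0 / area_cm2  # L/min -> cm^3/s, then /cm^2

print(face_velocity_cm_per_s(1.8, 4.9))    # BSEN 14683:2019 coupon: ~6.1 cm/s
print(face_velocity_cm_per_s(45.0, 85.0))  # whole mask, pleats opened: ~8.8 cm/s
```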
Donning pediatric facemasks: Impact of the pleats

While users are expected to open the pleats when donning a pediatric facemask, a lack of instructions may inadvertently prompt children to wear the masks without opening the pleats. To understand the impact of unopened pleats on pressure drop, we first measured the pressure drop in all four brands of masks with the pleats opened, and then remeasured with the pleats unopened. Unopened pleats resulted in a clear increase in pressure drop, with brands B and C showing a pressure drop increase that exceeded 25 mmH₂O at 30 LPM. In addition, counterintuitively, wearing a facemask with unopened pleats, rather than offering additional filtration, also resulted in a decrease in filtration efficiency . For instance, facemask brands A and C with open pleats at a flow rate of 30 LPM exhibited filtration efficiencies of 71% and 94%, respectively, which reduced (p = 0.038) to 63% and 86%, respectively, with the pleats unopened. The unopened pleats resulted in about 18% less surface area; since the face velocity increased in the facemask with unopened pleats (for the same flow rate), and filtration efficiency decreases with increasing face velocity, this resulted in a ~8–9% decrease in filtration efficiency for both brands A and C. The increase in pressure drop and reduction in filtration efficiency imply that when a child dons a pediatric facemask with the pleats unopened, the facemask will likely be difficult to breathe through and will cause increased leakage of unfiltered aerosols, resulting in poor protection to the user. The above results underscore the importance of opening the pleats before donning a pediatric facemask to reduce pressure drop , as well as to achieve maximum filtration through these facemasks .

Lot to lot variability in facemasks

To investigate potential variations in filtration efficiency and pressure drop among different lots of pediatric facemasks of the same brand, we conducted a limited lot-to-lot comparison involving brands A and C at flow rates of 5 and 30 LPM and found these were similar across two lots, with no statistically significant differences based on Student's t-test (p > 0.05) (S4a and S4b Fig in ).

Fit test and breathing resistance

While the filtration efficiency and pressure drop measurements are important for a thorough characterization of a pediatric facemask, given the lack of clinical studies conducted with pediatric facemasks it is challenging to derive any meaningful clinical implications based on a 10–20% difference in filtration efficiency or a 2–3 fold difference in pressure drop across the various brands of pediatric facemasks. However, fit factor is a more meaningful metric, as a fit factor of 2 versus 8 implies that a person wearing the mask with the higher fit blocks out a significantly greater number of aerosols (by a factor of 4), thus reducing the risk of airborne infection . Therefore, the subsequent sections delineate brand-to-brand performance using the metric of fit factor.

Intended use—Fit factor measurements of child headforms across various brands of pediatric facemasks

In , the results of the fit test on the 8-year-old Dizzy headform are illustrated. Brands A, B and D show higher fit factors at low flow rates that gradually decline with increasing flow rate, which is representative of heavier activities that result in increased exertion during breathing and/or increasing age (as inhalation flow rate increases with age). Curiously, brand C, despite its higher filtration efficiency , consistently demonstrated a lower fit factor compared to the other brands across all flow rates, potentially due to poorly deformable nosepieces that did not conform to the contour of the manikin nose, leading to leaks. In contrast, brand A exhibits a fit factor 5 times higher than brand C at lower flow rates, which gradually decreases as the flow rate increases. Similarly, brands B and D displayed fit factors 8 and 10 times higher than brand C at lower flow rates, respectively. The observed trends are similar for the 11-year-old Billie and 5-year-old Roberta headforms and are reported in the form of overall fit factor in the next section . illustrates breathing resistance measurements for four pediatric facemask brands on the 8-year-old Dizzy headform.
As flow rates increase, breathing resistance increases across all brands, indicating difficulty in breathing at higher flows. The red dotted line indicates the 2 mmH₂O threshold that we determined by extrapolating from evidence on adult respirators. The majority of the brands exceed this 2 mmH₂O threshold at the higher flow rates of 30–45 LPM. This suggests that children may experience more discomfort and breathing challenges when wearing masks during high-intensity activities (sports) or if they are older (as older age is likely to be associated with higher inspiratory flow rates). To investigate potential variations in fit and breathing resistance among pediatric facemasks from the same brand but different lots, a lot-to-lot comparison was conducted. S4c and S4d Fig in S6 Text of illustrate the overall fit factor and breathing resistance values for brands A and C at flow rates of 30 and 45 LPM. Both plots reveal no significant differences, as determined by Student's t-test (p > 0.05), implying that our findings are likely valid across multiple lots of pediatric facemasks. To simulate realistic pediatric breathing and compare fit-testing results between constant suction flow and oscillatory flow, we conducted fit tests on four brands of pediatric facemasks across three pediatric headforms. S5a Fig in S7 Text of illustrates the overall fit factor values on the 8-year-old Dizzy headform using oscillatory and constant flow rates. On average, the overall fit factor for brands A, B, C, and D was 3, 6, 2, and 8, respectively. These values were found to be similar for oscillatory and constant flow rates based on Student's t-test (p > 0.05) across all three headforms (S5a-S5c Fig in ), implying that a simpler constant suction flow test setup may be a good representative setup for measuring overall fit factors.

Off label use—Manikin fit measurements when used for younger or older children

The overall fit and highest breathing resistance in headforms of 5-, 8-, and 11-year-olds, following the recommended age range of 4 to 12 years for pediatric facemask usage, are shown in . This situation constitutes an intended use scenario, as the pediatric facemasks FDA has cleared so far are for that specific intended population. We also explored mask performance on headforms representing ages outside this typical range, specifically 2- and 14-year-olds, which we refer to here as "off label" since these FDA-cleared facemasks are not intended for these younger and older age groups. On average, we observed a 43% decrease in the overall fit factor and a 33% decrease in breathing resistance for the 2-year-old headform compared to intended use scenarios. Similarly, a 22% decrease in the overall fit factor and a 31% decrease in breathing resistance were noted for the 14-year-old headform. This decline is attributed to pediatric facemasks not being designed for children younger than 4 or older than 12, leading to inadequate face coverage, which caused leaks and concomitantly decreased the overall quantitative fit factor and lowered breathing resistance. Our findings of reduced overall fit in off-label situations underscore the need to develop pediatric facemasks for the <4-year and >12-year age groups. This is particularly important as the CDC recommends masks be worn by children older than 2 years of age. An alternative for the older pediatric population (14 years and above) would be the use of N95 respirators, which are indicated for use in workplaces and by adults.
Although the overall N95 respirator fit for the 14-year-old manikin was found to be very high (quantitative fit factor of 200, S12a Fig in ), the significant breathing resistance of N95 respirators relative to pediatric facemasks (S12b Fig in ) would likely hinder the practicality of this approach. It may be beneficial for the academic and medical device communities to engage in more research on developing respirator designs suitable for older children, providing high quantitative fit (>10) compared to pediatric facemasks while maintaining low pressure drop and breathing resistance at reasonably high flow rates of 30–45 LPM.

Practical implications for the community

General public

Proper Mask Usage Practices: Given the emphasis on opening pleats before donning the mask, it is crucial for the general public, especially parents and caregivers, to be educated on proper mask-wearing practices to ensure optimal breathability and minimized leakage.

Age-Appropriate Mask Selection: Parents should pay attention to age recommendations when choosing pediatric facemasks. This study suggests that masks not intended for children older than 12 or younger than 4 may lead to inadequate face coverage, implying low quantitative fit.

High filtration efficiency may not correlate with good fit: Parents and caregivers should ensure that the nose-clip of the facemask conforms to the child's nose bridge. This will help ensure a better fit and maximum protection to the wearer. Without this step, even a higher filtration efficiency mask may not offer adequate protection to the wearer.

Pediatric facemask (device) manufacturers

Design Optimization for Fit: Manufacturers may ensure that nose clips are designed for optimal fit and that the clips can conform to the nose bridge adequately. This will help ensure proper fit and protection to the wearer. It may also be beneficial to develop a test method to assess the malleability of nose-clips and characterize the clips for better fits.

Newer Masks for Specific Age Groups: Considering the absence of masks for children under 4 and over 12 years old, there is an opportunity for manufacturers to develop and introduce masks tailored to these age groups. This addresses a current gap and ensures a more comprehensive range of protective options for pediatric populations.

Pressure Drop Considerations: There is a need to develop more optimal pediatric facemask designs with minimal breathing resistance (~2 mmH₂O) at relatively high flow rates (45 LPM), which is lower than the breathing resistance of <5 mmH₂O described in ASTM F3502 for Barrier Face Coverings . However, what breathing resistance is optimal would likely depend on the age range the mask design is indicated for.

Academia and future research

Bench top studies: Using 3D-printed child manikins to assess fit across a broader spectrum of diverse anthropometric features.

Breathability: Fit testing performed on children with various brands of facemasks, to determine what nominal fit factor may offer reasonable protection to children and to assess whether the brand-to-brand differences in pressure drop across pediatric facemasks are clinically meaningful. When making such assessments it would be important to first fully characterize the mask brand used in terms of filtration and pressure drop, as well as assess the performance of nose-bridge strips and ear loops. Additionally, development of pediatric facemasks that can be used by those younger than 4 years or older than 12 years is needed,
as is development of a stratified optimal pressure drop range for the various pediatric age groups, including 2–4 years, 4–12 years, and those above 12 years of age.

Inclusion of Various Ethnicities and Diversity: We did not consider various ethnicities due to data constraints; future research should strive to incorporate a more diverse demographic. This inclusivity would enhance the generalizability of findings and ensure that pediatric facemasks are evaluated across a spectrum of ethnic backgrounds. Also needed is an assessment of the difficulties around pediatric mask-donning for children who may have developmental challenges or lung diseases (e.g., asthma). These studies should also include assessment of typical flow rates for diseased lungs so that the masks designed can be tested at relevant flow rates.

Long-Term Wear Effects: Investigating the prolonged use of pediatric facemasks among children could offer insights into the long-term effects, comfort, and potential challenges associated with extended wear. This aspect is particularly relevant in scenarios where continuous mask usage is required, such as in school settings.

Activity Levels and User Experience: Exploring the impact of different activity levels on mask performance and the subjective experiences of children would provide valuable insights. Understanding how masks perform during various physical activities can guide the development of masks tailored to the diverse needs of active children, ensuring both protection and comfort. Considering factors such as comfort, breathability, and overall satisfaction would contribute to increased compliance.

Impact of Environmental Conditions: Considering the influence of environmental conditions, such as humidity and temperature, on mask performance would provide valuable information for ensuring effectiveness in various real-world situations.

Incorporation of Patient-Specific Factors: Assessing how patient-specific factors, such as respiratory conditions or facial anatomy variations (beards for adolescents, injuries), may influence mask performance is an avenue for future exploration. This personalized approach could contribute to the development of more tailored and effective pediatric facemasks.

Limitations

Limited Range of Testing Exercises: Our study focused on assessing facemask fit based on specific breathing exercises (normal and deep breathing at 30 and 45 LPM). While these exercises provided valuable insights, respirator fit testing typically performed on adults involves a more diverse range of movements, such as head turns, up-and-down motions, and talking , and is itself not intended to fully reproduce the motions of subjects in a true Workplace Protection Factor study. The absence of these additional movements in our assessment could impact the findings.

Individual Variability: Despite our efforts to cover a spectrum of age groups using different pediatric headforms, human facial features vary widely among individuals. Factors like ethnicity, age, gender, and facial dimensions significantly influence facemask fit and breathing resistance. Our study's reliance on specific headforms may not fully encapsulate this diversity in the pediatric population.

Simplifications in Skin Thickness: Because of a lack of information, and to simplify our methodology, we did not incorporate variable skin thickness in our headforms.
Although our previous research on adults suggested that these simplifications did not significantly affect the results when compared to adult N95 respirator fit testing, it is essential to recognize that pediatric facial structures might respond differently. The absence of variable skin thickness might influence the accuracy of our fit-testing measurements. Given the simplifications in our study, our quantitative fit results should be interpreted with caution.

Breathing Simulation: While we did not use the breathing simulator extensively, the limited studies we conducted demonstrated similarity between constant and oscillatory flow rates.

Ethnic Diversity: Various ethnicities were not considered due to a lack of data. While not studied in this context, the protocols described can still be used, by modifying headform measurements, to further our understanding of the impact of various racial and ethnic backgrounds on fit.

Breathability in Diseased Lungs: What amount of breathing resistance would be tolerable for children with asthma or other conditions was not studied.
Leveraging our validated adult manikin fit-test method adapted for the pediatric population, we developed a methodology for evaluating the fit factor and breathing resistance of cleared pediatric facemasks. We then assessed four pediatric facemask brands available in the U.S.
market, across 2–14-year-old pediatric manikins, to provide insights into how well pediatric facemasks are likely to perform in real-world situations and to outline future considerations. Key findings emphasize the necessity of a comprehensive evaluation covering all aspects of pediatric facemasks. Filtration efficiency (in the absence of any leaks) was consistently high (>80%) for the majority of the brands, even at the relatively high flow rate of 45 LPM. Brands exhibited substantial differences in pressure drop, with some brands surpassing the pressure drop limit of 2 mmH₂O at high flow rates, which may lead to discomfort for the child wearer. Fit may not correlate with high filtration efficiency, underscoring the critical role of meticulous design in ensuring optimal fit, particularly for nose-clips. Our findings also highlighted the need to develop facemasks for those below 4 years as well as above 12 years of age.

S1 File (DOCX)
Heart rate variability is enhanced during mindfulness practice: A randomized controlled trial involving a 10-day online-based mindfulness intervention

Mindfulness practice has been framed as a technique that may promote well-being, which to some extent has been scientifically demonstrated through studies showing reduced self-reported stress (e.g. ) and improved self-reported sleep quality . However, mindfulness has increasingly come under scrutiny for difficulties in defining the construct and for methodological shortcomings that complicate the interpretation of results from investigations of mindfulness and its purported effects . In line with this criticism, the majority of studies on stress reduction and sleep quality have assessed mindfulness using self-report measures . However, in the nascent field of wearable technology there are purported 'objective' physiological tools available to measure stress and sleep quality . For example, heart rate variability (HRV) provides a powerful tool for observing the interplay between the sympathetic and parasympathetic nervous systems . There has been some, albeit limited, research (for reviews see ) showing that mindfulness exerts beneficial effects on the cardiovascular system . The majority of these studies have focused on acute changes from being in a 'mindful state', while some studies cited above have investigated changes in resting baseline HRV between long-term mindfulness practitioners and novices (i.e. chronic changes). Investigations into the immediate physiological effects of mindfulness practice have revealed increased HRV . Also, long-term mindfulness retreats have been shown to increase HRV . Such increases in HRV, and the dominance of the parasympathetic nervous system (PNS) during mindfulness, may partly be explained by changes in respiration, which is modulated by the vagus nerve , and by the fact that respiration, via awareness of breathing, is central to mindfulness practice . Indeed, studies have demonstrated that respiration rate is decreased and the HRV response increased during mindfulness . Thus, respiratory rate may need to be considered as a metric reflecting decreased sympathetic drive during formal mindfulness practice. This raises an open question: Is the respiratory component only present during formal mindfulness practice (i.e. an acute state-dependent effect) or is it a trait-dependent effect emerging in the course of practicing mindfulness over time (i.e. a chronic effect)?

The present study

The overall goal of the present study was to investigate the purported effects of mindfulness in a naturalistic setting, as opposed to a lab-based environment, through the lens of HRV, while at the same time examining the distinction between acute and more chronic HRV changes arising from mindfulness practice. To meet these experimental goals, we designed a study that sought to bifurcate acute and chronic effects of mindfulness practice. We employed a fully randomized 10-day online-based longitudinal mindfulness intervention, whereby we controlled for practice effects with an active-control group as well as a non-intervention control group, in the context of continuous HRV measurement.
Given that cross-sectional studies cannot demonstrate causality, and that wait-list designs are confounded by unmatched practice effects and efforts , we decided to employ a longitudinal design involving music listening as an active-control intervention with similar practice duration and demand characteristics as the mindfulness group (see for a description of the two active interventions). As previous studies have shown increased attentional control arising from mindfulness practice, presumably through interoceptive nonjudgmental awareness , we chose music as an active-control intervention that we expected would deemphasize these elements, thereby isolating the components of action in mindfulness practice.

Employing HRV to track cardiovascular effects of stress

It is generally assumed that HRV is a measure of beat-to-beat variability in heart rate (HR) that is mediated by the autonomic nervous system (ANS). The sympathetic nervous system (SNS) increases the heart's contraction rate and force (cardiac output) and decreases HRV, which is needed during exercise and mentally or physically stressful situations. Conversely, the PNS slows the heart rate and increases HRV to restore homeostasis. The natural interplay between these two systems allows the heart to quickly respond to different situations and needs based on the context . The root mean square of successive differences between normal heartbeats (RMSSD) is considered to represent the beat-to-beat variance in HR and is the primary time-domain measure used to compute the vagally mediated changes reflected in HRV . The primary frequency-domain measure is the high-frequency HRV (HF-HRV) component (0.15 to 0.40 Hz), which estimates inhibitory vagally induced PNS input, along with LF/HF ratios . We report results from both time-domain measures (RMSSD) and frequency-domain measures (HF-HRV and LF/HF ratios), whilst also summarizing additional time- and frequency-domain measures (see and Tables).
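For readers unfamiliar with these metrics, the sketch below illustrates how RMSSD and HF power can be computed from a series of RR intervals; this is a generic, minimal implementation (names and parameter choices are ours), not the authors' analysis pipeline.

```python
import numpy as np
from scipy.signal import welch

def rmssd(rr_ms):
    """Time-domain vagal index: root mean square of successive
    differences between normal RR intervals (in ms)."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

def hf_power(rr_ms, fs=4.0, band=(0.15, 0.40)):
    """Frequency-domain vagal index: power of the RR tachogram within
    the HF band (0.15-0.40 Hz), after resampling to an even time grid."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                    # beat times in seconds
    t_even = np.arange(t[0], t[-1], 1.0 / fs)     # evenly spaced 4 Hz grid
    rr_even = np.interp(t_even, t, rr)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(rr_even)))
    in_band = (f >= band[0]) & (f <= band[1])
    return np.trapz(pxx[in_band], f[in_band])     # power in ms^2
```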
Chronic cardiovascular effects of mindfulness practice

To assess chronic effects of mindfulness practice on the underlying HRV response, we asked participants in the 3 groups to initially complete 2 days (48 hours) of continuous HRV measurement, which constituted a pre-intervention chronic phase (see for a description of the definitions of chronic and acute phase measurements). A similar procedure was implemented after completion of the 10-day interventions; that is, participants in the 3 groups were asked to discontinue the practice during the 48-hour HRV measurement session, which constituted a post-intervention chronic phase. This setup allowed us to probe chronic effects of mindfulness practice on the HRV response, since no formal mindfulness practice took place in either the pre- or the post-intervention sessions. As mentioned above, respiration rate may be a metric reflecting decreased sympathetic drive during formal mindfulness practice; we expected no such respiratory effect in the active-control group, and therefore no corresponding impact on chronic HRV effects in the active-control intervention or in the non-intervention control group. This aspect of the experimental design enabled us to isolate the components of action in mindfulness practice. In addition, based on findings showing that mindfulness reduces self-perceived stress , our first hypothesis (H1) was that mindfulness practice would increase the daytime HRV response in the mindfulness group post-training compared to pre-training and across groups.

Acute cardiovascular effects of mindfulness practice

To assess acute effects of mindfulness practice on the underlying HRV response, we measured HRV in the two active intervention groups during each of the 10 daily mindfulness or music sessions, which constituted daily acute phases of HRV across the intervention period. This allowed us to track the development of mindfulness skills relative to the music group in terms of changes in the HRV slope over the 10-day time course. The instructions for the practice of mindfulness involve intentionally directing attention to one's experience in the present moment . This practice entails frequently becoming distracted and returning the attention to the present moment, thereby centering awareness on present-moment experience and enhancing attentional capacity. Novice practitioners often find that mindfulness practice entails frequent distractions and that their intentional focus has wandered . Given that the participants were novices to the practice of mindfulness, we expected this to be reflected as practice effects in the mindfulness group. Specifically, as mindfulness practice over the 10-day time course should produce a practice effect and thereby increase the HRV response, our second hypothesis (H2) was that the HRV response in the mindfulness group would be significantly elevated over the 10-day practice period. Furthermore, based on previous findings showing that respiration rate is decreased and the HRV response increased during mindfulness even without instructions to alter (i.e. slow) breathing , our third hypothesis (H3) was that mindfulness practice would decrease respiration rate in the acute practice phase but not in the chronic phase. An attenuated respiration rate, or longer exhalations relative to inhalations, often seen in mindfulness practice , exerts immediate physiological effects caused by parasympathetic activation, such as decreased oxygen consumption, decreased heart rate and blood pressure, and increased HRV . As such, H3 addresses whether slowed respiration would be present exclusively during formal mindfulness practice (i.e. the acute phase) or whether reduced respiration would also be present outside of formal mindfulness sessions (i.e. the chronic phase).

Cardiovascular effects of sleep as a function of mindfulness practice

Our experimental setup furthermore enabled us to address the effects of mindfulness on sleep quality. Sleep is a fundamental part of life and serves as a biological investment associated with growth, repair, and maintenance of bodily functions . Poor sleep is associated with increased risk of cardiovascular disease and with mood and anxiety symptomatology . As sleep exerts an effect on HRV , studies have associated poor sleep quality with elevated sympathetic activity and suppressed parasympathetic activity . Based on numerous findings showing that mindfulness exerts a positive effect on self-perceived sleep quality , our fourth hypothesis (H4) was that mindfulness practice would increase the HRV response during sleep in the mindfulness group post-training compared to pre-training and across groups. Finally, we collected self-report data from the Perceived Stress Scale (PSS) , the Mindfulness Attention Awareness Scale (MAAS) and the D3 Sleep Quality Index (D3SQI) to assess differences across groups.
In addition, we analyzed home-practice adherence data, explicitly controlling for practice effects with the active-control group, to probe whether the dose of mindfulness practice impacted the HRV response.
Participants

A total of 99 healthy volunteers participated in the study. Nine participants either dropped out or exhibited >10% missing data in the HRV pre/post measurements (3 in the mindfulness group; 4 in the music group; 2 in the control group). Thus, the total number of participants from which data could be collected was 30 in the mindfulness group, 30 in the music group and 30 in the control group. Age and gender distributions are listed in .

Recruitment

Recruitment for the current study involved online-based advertisement campaigns through the University of Southern Denmark's Facebook page. The study was framed as a stress reduction study. The recruitment information further stated that the study involved either a mindfulness, music or non-intervention control group lasting 10 days, with a required 20–30 min of daily training using an app-based platform (either mindfulness or music). In addition, the recruitment information stated that participants would be assigned to one of the three groups in a random manner, which eliminated self-selection bias across the groups. Participants were informed that, in addition to using one of the two intervention training apps (either mindfulness or music), they would be required to complete questionnaires during the intervention period. In the next stage of the recruitment process, interested participants were provided with written information specifying the study's logistics and requirements. After having agreed to the study requirements in writing, participants were invited to a meeting in which each participant individually received verbal information about the physiological recording procedure and about when to fill in the questionnaires. This information included that participants could discontinue their participation at any time during the study. Participants were informed that the app-based platforms (i.e. the mindfulness and music interventions) utilized in the study ran on both Android and iOS, and thus required access to a smartphone for the study duration. After this information was provided, participants were given the opportunity to ask questions about the study before being asked to sign the consent form. Following consent, participants were shown the physiological recording equipment and briefed regarding the experimental procedures, both verbally and in writing (handouts). Exclusion criteria were previous experience with mindfulness meditation, current psychiatric illness or psychiatric medication intake, and not owning a smartphone. Inclusion criteria were an age between 21 and 60 years and an interest in receiving a free stress reduction intervention. Participants received monetary compensation for their participation corresponding to DKK 400 (approximately USD 60). All procedures were conducted in accordance with the local ethical committee (Videnskabsetisk Komité for Region Syddanmark, ethics approval ID S-20170199).

Experimental procedures

The randomization sequence was determined after study recruitment but before study launch. Specifically, participants were allocated to either the mindfulness, active-control music or non-intervention control group in a random manner. The 99 participants who volunteered for the study during the recruitment period (November to December 2019) were randomized into one of the 3 groups. This randomization procedure ensured that the data collection period (January 2020 to August 2020) was spread out across the 3 groups. Participants were not informed about group allocation until arrival at the lab for HRV measurement. Sequence generation and randomization were performed by the research team, who were not formally blinded to group allocation.
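The exact sequence-generation mechanism is not reported beyond random assignment; purely for illustration, the following is a minimal sketch of one way 99 volunteers could be allocated to three equally sized groups. The seed and function name are our own assumptions, not the study's procedure.

```python
import random

def allocate(participant_ids,
             groups=('mindfulness', 'music', 'control'),
             seed=2019):
    """Randomly allocate participants to equally sized groups."""
    rng = random.Random(seed)        # fixed seed for a reproducible sequence
    ids = list(participant_ids)
    rng.shuffle(ids)                 # random order of participants
    # Cycle through the groups over the shuffled list: 99 ids -> 33 per group.
    return {pid: groups[i % len(groups)] for i, pid in enumerate(ids)}

assignment = allocate(range(1, 100))
```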
Participants were informed in the lab about the procedures related to the 10-day intervention. They were given instructions about the 2-day (48-hour) continuous HRV measurement that would take place prior to initiating the active intervention (mindfulness or music listening). Prior to the HRV measurement, participants were instructed to refrain from alcohol and nicotine in order to avoid the known influences of these factors on autonomic activity . Participants were instructed not to engage in intense physical activity for the 48-hour period but were otherwise asked to maintain their daily and nightly routines. Both the pre and post measurement periods were conducted on weekdays. Following the 48-hour pre-measurement, participants were instructed to initiate the 10-day mindfulness or music intervention. During the daily mindfulness or music sessions over the course of the 10-day intervention, participants' HRV response was recorded. Upon completion of the 10-day intervention, participants completed another 48-hour HRV measurement. The resulting time course containing pre and post measurements and data from practice sessions for each participant was extracted from the HRV monitor upon completion of the study and processed for further analysis (see Physiological measures below). Furthermore, during the visit to the lab, participants in the three groups were provided with oral and written instructions for usage of the HRV monitor employed in the study. Having received a demonstration of, and practice with, the montage of the electrodes and HRV monitor, participants in the two active interventions (i.e. mindfulness and music) were instructed in how to complete the daily practice session at home. Specifically, during the daily sessions (mindfulness or music listening), participants were asked to sit quietly by themselves in an upright position on a chair or a cushion and follow the guided mindfulness session (mindfulness group) or listen to the music (music group) for the entire duration of the session. All participants subjectively recorded home practice using paper logs provided by the research team (see Compliance data below). Participants were instructed to initiate HRV recording 5–10 min prior to the daily sessions to allow for calibration, and furthermore asked to complete the daily practice sessions at approximately the same time each day (between 8am and 6pm) and not to engage in intense physical activity for approximately 2–3 hours before the session. The acute cardiovascular effects were defined and operationalized for the purpose of this study as HRV measurement phases during which participants formally practiced mindfulness or listened to music. HRV was captured and time-locked using the cross-checked timestamps derived from the training apps (see Compliance data and Interventions: Mindfulness and music below).
This entailed a dataset of 10 consecutive daily time courses, with a duration of 20 min for the initial 5 days and 30 min for the last 5 days, for each participant practicing mindfulness or listening to music. By contrast, the chronic cardiovascular effects were defined and operationalized as HRV measurement phases conducted either at baseline (i.e. pre) or following (i.e. post) the 10-day intervention. Importantly, participants were instructed not to practice mindfulness or listen to music for the duration of these measurement periods. This entailed a continuous 48-hour measurement phase for each participant both pre and post intervention. Note that the 48-hour measurement phases were binned in segments according to the diurnal rhythm (see Physiological measures below). These measurement phases were initiated immediately before the intervention and the day after completion of the intervention.

Interventions: Mindfulness and music

Mindfulness intervention

The mindfulness intervention consisted of a 10-day app-based program provided by Headspace ( https://www.headspace.com/ ). Participants did not receive an introductory session to the mindfulness or music programs but were provided with written instructions for installing the training app and using it during the 10-day intervention. The content of the training was based on well-established concepts and practices within the mindfulness literature and entailed daily practice in guided mindfulness meditation, with instructions delivered through short animated videos and sound files in the app. The training program centered on mindfulness meditation, which included focusing on a selected object (i.e. the body or the breath), monitoring the activity of the mind, noticing mind-wandering, and developing a non-judgmental orientation toward one's experience (i.e. equanimity). The mindfulness group was instructed to follow an introductory course to mindfulness in the Headspace app with two levels, namely 'Basics I–II'. Participants completed 'Basics I' for the initial 5 days with a daily duration of 20 min, and the 'Basics II' program for the remaining 5 days with a daily duration of 30 min. The Headspace app has been applied in previous research demonstrating effects pertaining to stress relief, such as overt self-reported stress , self-reported well-being and self-reported mindfulness . By examining user data provided by the app developers on how much time each subject had spent meditating with the app, we could confirm that all participants showed acceptable adherence to the program (>80%). Participants were informed of this and consented to us gaining access to their user data before entering the study.

Music intervention

We employed an active-control condition (listening to music), which was also made available on an app-based platform, in order to structurally match the active-control intervention on content not specific to mindfulness while also controlling for nonspecific treatment effects such as placebo, social support, and demand characteristics . The music used in the study was instrumental, comprising 60 compositions in total. The music was organized into different playlists in the app, specifically 'focus', 'binaural beats', and 'piano'. Each of the 3 playlists consisted of 20 tracks with durations of 2 to 4 min.
Participants could freely select which playlists to listen to and were free to listen to any or all 3 playlists during the study. The daily listening requirement was 20 min for the initial 5 days and 30 min for the remaining 5 days, to match the mindfulness intervention and allow a balanced comparison across groups. By examining user data provided by the app developers on how much time each subject had spent listening to the music available in the app, we could confirm that all participants showed acceptable adherence to the program (>80%). Participants were informed of this and consented to us gaining access to their user data before entering the study.

Non-intervention control

The non-intervention control group was asked to maintain their daily and nightly routines for the 10-day period between the pre and post 48-hour HRV measurement periods and was explicitly asked not to perform mindfulness or listen to music during this period. Acute physiological data was not collected from the non-intervention group because, in contrast to the two active intervention groups, there was no uniform activity that this group was asked to perform.

Compliance data

Participants in both app-based intervention groups were instructed to follow the programs in full to receive the maximum benefit of the interventions and to complete the daily training/listening requirements at any time of day that fit their schedule, between 8am and 6pm. Participants were provided with a log in which they were asked to fill in the time of day when they completed the daily practice. It was emphasized that self-reports should accurately reflect their practice, so as to discourage dishonest reporting. The log was handed over to the experimenters upon completion of the interventions. Both apps (i.e. mindfulness and music) contained a function that tracked the timestamps of app usage. This usage information was available to participants so that they could keep track of their daily usage during the study. In addition, the time course containing each completed practice session for each participant was extracted from the apps by the experimenters upon completion of the study and processed for further analysis. Specifically, the usage data generated from the apps was cross-checked with the self-report logs for each participant. The physiological data was adjusted and time-locked with the onset timestamp provided in the apps. We included data in which participants completed >80% of a practice session. The mean practice data is reported in the Results section .
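As an illustration of the time-locking and >80%-completion rule described above, the following is a minimal Python sketch. The data structures and function name are our own assumptions, not the study's actual extraction code.

```python
from datetime import datetime, timedelta

def extract_session(rr_times, rr_ms, app_start, duration_min,
                    min_completion=0.8):
    """Slice a continuous RR record to one practice session.

    rr_times: datetime stamp per beat; rr_ms: matching RR intervals (ms).
    app_start/duration_min: session onset and scheduled length from the app.
    Sessions covering < min_completion of the scheduled duration are
    dropped, mirroring the >80% inclusion rule described above.
    """
    end = app_start + timedelta(minutes=duration_min)
    # Keep only beats falling inside the app-defined session window.
    seg = [rr for ts, rr in zip(rr_times, rr_ms) if app_start <= ts < end]
    covered = sum(seg) / 1000.0 / 60.0   # minutes of RR data in the window
    return seg if covered >= min_completion * duration_min else None
```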
Psychological measures

Three questionnaires were administered pre and post the 10-day intervention using an app-based platform ( https://www.datacubed.com/ ). The pre-questionnaires were filled in by participants prior to initiating the 48-hour HRV pre-measurement, while the post-questionnaires were filled in after completion of the 48-hour HRV post-measurement. However, due to an error in the app, datapoints from 13 participants (3 in the mindfulness group; 4 in the music group; 6 in the control group) were not captured and were thus lost. First, all participants were asked to complete the PSS , a 10-item scale designed to measure the perception of stress. Second, all participants were asked to complete the MAAS , a 15-item scale designed to assess dispositional mindfulness. Finally, participants were asked to complete the D3SQI, a 34-item questionnaire. The gold standard for assessment of sleep quality is polysomnography ; however, the Pittsburgh Sleep Quality Index (PSQI) has been demonstrated to have cardiovascular prognostic value , and as the D3SQI has been constructed to parallel the PSQI, the D3SQI was chosen to assess sleep quality in the current study. The mean data from the participants' psychological measures for the three intervention groups are reported in .

Physiological measures

Physiological acquisition

HR was recorded as beat-to-beat intervals with the Firstbeat Bodyguard 2 HRV monitor (Firstbeat Technologies Ltd., Jyväskylä, Finland), which has previously been applied in research and validated against standard physiological monitoring systems used in clinical and laboratory settings . The Bodyguard 2 is a lightweight wearable monitor attached to the chest using two ECG electrodes (Ambu Ltd., Ballerup, Denmark) for measuring 24-hour HRV (RR intervals), including respiratory measures.

Physiological signal processing

The HRV measurements conducted in this study were performed according to the guidelines of the Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology . HRV quantifies the change in the time intervals between consecutive heartbeats and provides an index of SNS and PNS activity at any given time . HRV parameters can broadly be classified into time- and frequency-domain measures. The primary time-domain measure is RMSSD, which reflects the beat-to-beat variance in HR and is typically used to estimate the vagally mediated changes reflected in HRV . RMSSD is reported in milliseconds (ms). The primary frequency-domain measure is the high-frequency HRV (HF-HRV) component (0.15 to 0.40 Hz), which estimates inhibitory vagally induced PNS input, together with LF/HF ratios. Following these standardized procedures, we report RMSSD, HF-HRV and LF/HF ratios in this study. Furthermore, to gain comprehensive insight into the ANS adaptation to the mindfulness practice employed in the current study, we also report other measures in the time and frequency domains (see ). All raw physiological data was processed for time- and frequency-domain parameters using the Kubios analysis software (version 3.4). The recorded data was imported into Kubios to calculate R-R intervals and the associated variability . Examination of the electrocardiogram (ECG) data ensured that the automatic R-wave detection algorithm had performed satisfactorily. Artifact removal for the HRV was performed manually using the artifact correction tool for detected R-R intervals provided by the Kubios software. When correction was applied, detected artifact beats were replaced using cubic spline interpolation. Spectral analysis was computed using the fast Fourier transform procedure provided by the Kubios software. Because of their skewed distributions, the HRV variables were log-transformed prior to statistical analysis. The HRV data was recorded continuously at the pre and post time points for the 48-hour pre-measurement and the 48-hour post-intervention measurement. The time course was broken up into 24-hour segments and calculated as daytime (16 hours) and nighttime (8 hours) means on a participant-by-participant basis, with the data segmented according to estimated sleep (8 hours) and wake (16 hours) hours across participants. Given these extensive time courses, the HRV activity reported in this study should be considered a combination of SNS and PNS activity at any given time .
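The day/night binning of the 48-hour recordings can be made concrete with the sketch below. It is a simplification we introduce for illustration: it assumes hourly RMSSD values and a fixed 16 h wake / 8 h sleep rhythm starting at the wake period, whereas the study estimated sleep and wake hours per participant.

```python
import numpy as np

def day_night_means(hourly_rmssd, day_hours=16, night_hours=8):
    """Split a 48-hour series of hourly RMSSD values into day/night means.

    Assumes the recording starts at the beginning of the wake period and
    follows a fixed 16 h wake / 8 h sleep rhythm, per the binning above.
    """
    x = np.asarray(hourly_rmssd, dtype=float).reshape(2, 24)  # two 24-h days
    day = x[:, :day_hours].mean()                              # wake mean
    night = x[:, day_hours:day_hours + night_hours].mean()     # sleep mean
    return day, night
```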
Statistical analysis

All data are presented as mean ± SD unless otherwise stated. The data from the chronic phase (i.e. pre-post) were analyzed separately from the acute data (i.e. each of the 10 daily mindfulness or music sessions). Assumptions of normal distribution and sphericity were checked accordingly. Greenhouse-Geisser correction to the degrees of freedom was applied when violations of sphericity were present. Mixed 2 × 3 ANOVAs were used to assess whether there were pre-to-post intervention differences in the groups' mean RMSSD, HF-HRV and LF/HF ratios during day or night and in their respiration rate during day or night. Significant interaction effects from the mixed ANOVAs were followed up with t tests. For the acute data, a mixed 10 × 2 ANOVA was used to assess whether the two active interventions had an acute effect on the groups' RMSSD, HF-HRV, LF/HF ratios and respiration rate across the 10 intervention days. Significance was set at 0.05 (2-tailed) for all analyses. Pearson correlation analysis was conducted to investigate the relation between practice dose and the change in the mindfulness and music groups' RMSSD from pre to post measurement. Pearson correlations (r) were considered small = 0.1, medium = 0.24 and large = 0.37, as suggested by Cohen . The effect sizes for the mixed-measures ANOVAs were calculated as partial eta squared ( η 2 p ), using the interpretation small = 0.02, medium = 0.13 and large = 0.26 . The effect sizes for the t tests were calculated as Cohen's d, using small = 0.2, moderate = 0.5 and large = 0.8, as also suggested by . All data analysis was conducted using the Statistical Package for the Social Sciences (SPSS, version 26).
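For illustration only, the sketch below reproduces the follow-up steps of this pipeline in Python; the mixed ANOVAs themselves were run in SPSS and are not shown. The toy arrays and the paired (difference-score) formulation of Cohen's d are our own assumptions, since the paper does not specify which paired-d variant was used.

```python
import numpy as np
from scipy import stats

def cohens_d_paired(pre, post):
    """Cohen's d for a paired contrast: mean difference / SD of differences."""
    diff = np.asarray(post, float) - np.asarray(pre, float)
    return diff.mean() / diff.std(ddof=1)

# Toy data standing in for per-participant RMSSD (ms) and practice minutes.
pre = np.array([42.0, 38.0, 55.0, 61.0, 47.0])
post = np.array([48.0, 41.0, 60.0, 63.0, 52.0])
minutes_practiced = np.array([230.0, 180.0, 250.0, 245.0, 210.0])

# Log-transform the skewed RMSSD values, as in the analysis pipeline.
log_pre, log_post = np.log(pre), np.log(post)

# Follow-up paired t test after a significant interaction, with effect size.
t, p = stats.ttest_rel(log_post, log_pre)
d = cohens_d_paired(log_pre, log_post)

# Dose-response: Pearson correlation of practice time with RMSSD change.
r, p_r = stats.pearsonr(minutes_practiced, log_post - log_pre)
```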
A total of 99 healthy volunteers participated in the study. 9 participants either dropped out or exhibited >10% missing data in the HRV-pre/post measurements (3 participants in the mindfulness group; 4 participants in the music group; 2 participants in the control group). Thus, the total number of participants from which data could be collected was 30 in the mindfulness group, 30 in the music group and 30 in the control group. Age and gender distributions are listed in .
Recruitment for the current study involved online-based advertisement campaigns through the University of Southern Denmark’s Facebook-page. The study was framed as a stress reduction study. Recruitment information furthermore informed that that the study involved either a mindfulness, music or a non-intervention control group lasting 10 days with a required 20–30 min. of daily training using an app-based platform (either mindfulness or music). In addition, recruitment information included that participants would be assigned to one of the three groups in a random manner, which eliminated any self-selection bias across the groups. Participants were informed that they in addition to one of the two intervention training-apps (either mindfulness or music) would be required to complete questionnaires during the intervention period. The next stage of the recruitment process involved that interested participants were provided with written information specifying the study’s logistics and requirements. After having agreed to the study requirements in writing, participants were invited to a meeting in which each participant individually received verbal information about the physiological recording procedure and information about when to fill in the questionnaires. This information included that participants at any time during the study had the option to discontinue their participation in the study. Participants were informed that the app-based platforms (i.e. the mindfulness and music interventions) utilized in the study ran on both Android and IOS, and thus required that participants had access to a smartphone for the study duration. After this information was provided to participants, they were given an option to ask questions about the study before being asked to sign the consent form. Following consenting to the study, participants were informed and were visually shown the physiological recording equipment and briefed regarding the experimental procedures. They received this information both verbally and in writing (handouts). Exclusion criteria were previous experience with mindfulness meditation, and current psychiatric illness or psychiatric medication intake or not owning a smartphone. Inclusion criteria required that all participants were between 21 and 60 years of age, and interest in receiving a free stress reduction intervention. Participants received monetary compensation for their participation in the study corresponding to DKK 400 (approximately USD 60). All procedures were conducted in accordance with the local ethical committee (Videnskabsetisk Komité for Region Syddanmark–Ethics approval ID S-20170199). Experimental procedures The randomization sequence was determined after study recruitment but before study launch. Specifically, participants were allocated to either the mindfulness, active-control music or non-intervention control group in a random manner. The 99 participants who volunteered for the study during the recruitment period (which took place from November to December 2019) were randomized into one of the 3 groups. This randomization procedure ensured that the data collection period (which took place from January 2020 to August 2020) was spread out across the 3 groups. Participants were not informed about group allocation until arrival to the lab for HRV measurement. Sequence generation and randomization was performed by the research team, who were not formally blinded to group allocation. 
Participants were informed in the lab regarding the procedures related to the 10-day intervention. Participants were given instructions about the 2-day/48 hours continuous HRV measurement that would take place prior to initiating the active intervention (mindfulness or music listening). Prior to the HRV measurement, participants were informed to refrain from alcohol and nicotine in order to avoid known influences of these factors on autonomic activity . Participants were instructed to not to engage in intense physical activity for the 48-hour period but were otherwise asked to maintain their daily and nightly routines. Both the pre and post measurement periods were conducted on weekdays. Following the 48-hour pre-measurement, the participants were instructed to initiate the 10-day mindfulness or music intervention. During the daily mindfulness or music sessions in the course of the 10-day intervention, participants’ HRV response were recorded. Upon completion of the 10-day intervention, participants completed another 48-hour HRV measurement. The resulting time course containing pre and post measurements and data from practice sessions for each participant was extracted from the HRV-monitor upon completion of the study and was processed for further analysis (see Physiological measures below). Furthermore, during the visit to the lab, participants in the three groups were provided with oral and written instructions for usage of the HRV-monitor that was employed in the study. Having received practice and demonstration of montage of the electrodes and HRV-monitor, participants in the two active interventions (i.e. mindfulness and music) were instructed in how to complete the daily practice session at home. Specifically the instructions included that during the daily sessions (mindfulness or music listening) participants were asked to sit in an upright position on a chair or on a cushion quietly by themselves and follow the guided mindfulness session (i.e. mindfulness group) or listen to the music (i.e. music group) for the entire duration of the session. All participants subjectively recorded home practice using a paper logs that they were provided with by the research team (see Compliance data below). Participants were instructed to initiate HRV recording 5–10 min prior to initiating the daily sessions to allow for calibration, and furthermore asked to complete the daily practice sessions at approximately the same time (between 8am-6pm) and not to engage in intense physical activity approximately 2–3 hours before the session. The acute cardiovascular effects were defined and operationalized for the purpose of this study as HRV measurement phases during which participants formally practiced mindfulness or were listening to music. HRV was captured and time-locked using the cross-checked timestamps derived from the training apps (see Compliance data and Interventions : Mindfulness and music below). This entailed a dataset of 10 consecutive daily time courses with a duration of 20 min for the initial 5 days and 30 min for the last 5 days for each participant where they practiced either mindfulness or were listening to music. By contrast the chronic cardiovascular effects were defined and operationalized as HRV measurement phases conducted either at baseline (i.e. pre) or following (i.e. post) the 10-day intervention. Importantly, participants were instructed not to practice mindfulness or listen to music for the duration of these measurement periods. 
This entailed a continuous 48-hour measurement phase for each participant both pre and post intervention. Note that the 48-measurement phases were binned in segments according to the diurnal rhythm (see Physiological measures below). These measurement phases were initiated immediately before the intervention and the following day after completion of the intervention.
The randomization sequence was determined after study recruitment but before study launch. Specifically, participants were allocated to either the mindfulness, active-control music or non-intervention control group in a random manner. The 99 participants who volunteered for the study during the recruitment period (which took place from November to December 2019) were randomized into one of the 3 groups. This randomization procedure ensured that the data collection period (which took place from January 2020 to August 2020) was spread out across the 3 groups. Participants were not informed about group allocation until arrival to the lab for HRV measurement. Sequence generation and randomization was performed by the research team, who were not formally blinded to group allocation. Participants were informed in the lab regarding the procedures related to the 10-day intervention. Participants were given instructions about the 2-day/48 hours continuous HRV measurement that would take place prior to initiating the active intervention (mindfulness or music listening). Prior to the HRV measurement, participants were informed to refrain from alcohol and nicotine in order to avoid known influences of these factors on autonomic activity . Participants were instructed to not to engage in intense physical activity for the 48-hour period but were otherwise asked to maintain their daily and nightly routines. Both the pre and post measurement periods were conducted on weekdays. Following the 48-hour pre-measurement, the participants were instructed to initiate the 10-day mindfulness or music intervention. During the daily mindfulness or music sessions in the course of the 10-day intervention, participants’ HRV response were recorded. Upon completion of the 10-day intervention, participants completed another 48-hour HRV measurement. The resulting time course containing pre and post measurements and data from practice sessions for each participant was extracted from the HRV-monitor upon completion of the study and was processed for further analysis (see Physiological measures below). Furthermore, during the visit to the lab, participants in the three groups were provided with oral and written instructions for usage of the HRV-monitor that was employed in the study. Having received practice and demonstration of montage of the electrodes and HRV-monitor, participants in the two active interventions (i.e. mindfulness and music) were instructed in how to complete the daily practice session at home. Specifically the instructions included that during the daily sessions (mindfulness or music listening) participants were asked to sit in an upright position on a chair or on a cushion quietly by themselves and follow the guided mindfulness session (i.e. mindfulness group) or listen to the music (i.e. music group) for the entire duration of the session. All participants subjectively recorded home practice using a paper logs that they were provided with by the research team (see Compliance data below). Participants were instructed to initiate HRV recording 5–10 min prior to initiating the daily sessions to allow for calibration, and furthermore asked to complete the daily practice sessions at approximately the same time (between 8am-6pm) and not to engage in intense physical activity approximately 2–3 hours before the session. 
The acute cardiovascular effects were defined and operationalized for the purpose of this study as HRV measurement phases during which participants formally practiced mindfulness or were listening to music. HRV was captured and time-locked using the cross-checked timestamps derived from the training apps (see Compliance data and Interventions : Mindfulness and music below). This entailed a dataset of 10 consecutive daily time courses with a duration of 20 min for the initial 5 days and 30 min for the last 5 days for each participant where they practiced either mindfulness or were listening to music. By contrast the chronic cardiovascular effects were defined and operationalized as HRV measurement phases conducted either at baseline (i.e. pre) or following (i.e. post) the 10-day intervention. Importantly, participants were instructed not to practice mindfulness or listen to music for the duration of these measurement periods. This entailed a continuous 48-hour measurement phase for each participant both pre and post intervention. Note that the 48-measurement phases were binned in segments according to the diurnal rhythm (see Physiological measures below). These measurement phases were initiated immediately before the intervention and the following day after completion of the intervention.
Mindfulness intervention The mindfulness intervention consisted of a 10-day app-based program provided by Headspace ( https://www.headspace.com/ ). Participants did not receive an introductory session to the mindfulness or music programs but were provided with written instructions related to installation of the training app and usage for the 10-day intervention. The content of the training was based on well-established concepts and practices within the mindfulness literature and entailed daily practice in guided mindfulness meditation, with instructions delivered through short animated videos and sound files in the app. The training program centered on mindfulness meditation, which included focusing on a selected object (i.e. the body or the breath), monitoring the activity of the mind, noticing mind-wandering, and developing a non-judgmental orientation toward one’ s experience (i.e., equanimity). The mindfulness group was instructed to follow an introductory course to mindfulness in the Headspace app with two levels, namely ‘Basics I-II’. The program entailed that participants completed ‘Basics I’ for the initial 5 days with a daily duration of 20 min, and the ‘Basics II’ program for the remaining 5 days with a daily duration of 30 min. The Headspace app has been applied in previous research demonstrating effects pertaining to stress-relief such as overt self-reported stress , self-reported well-being and self-reported mindfulness . By examining user data provided by the app developers on how much time each subject had spent meditating with the app, we could confirm that all participants showed acceptable adherence to the program (>80%). Participants were informed of this and consented to us gaining access to their user data before entering the study. Music intervention We employed an active-control condition (listening to music), which we also made available using an app-based platform to structurally match the active-control intervention on content not specific to mindfulness, while in addition controlling for nonspecific treatment effects such as placebo, social support, and demand characteristics . The music used in the study was instrumental music and there were in total 60 music compositions. The music was organized according to different playlists in the app, specifically ‘focus’, ‘binaural beats’, and ‘piano’. Each of the 3 playlists consisted of 20 tracks with a duration of between 2 to 4 min. Participants were instructed to freely select which playlists to listen to and they were free to listen to any or all 3 playlists during the study. The daily listening requirement was 20 min for the initial 5 days, and 30 min for the remaining 5 days for the music group to match and allow balanced comparison across the mindfulness intervention group. By examining user data provided by the app developers on how much time each subject had spent listening to the music available in the app, we could confirm that all participants showed acceptable adherence to the program (>80%). Participants were informed of this and consented to us gaining access to their user data before entering the study. Non-intervention control The non-intervention control group were asked to maintain their daily and nightly routines for the 10-day period between the pre and post 48-hour HRV measurement period and were explicitly asked not to perform mindfulness or listen to music during this period. 
Acute physiological data was not collected from the non-intervention group in that there was no uniform activity level (as opposed to the two active intervention groups) that this group was asked to perform. Compliance data Participants in both app-based intervention groups were instructed to follow the programs in full to receive the maximum benefit of the interventions and complete the daily training/listening requirements at any time during the day that fitted with their schedule from 8am– 6pm. Participants were provided with a log in which they were asked to fill in the time during the day when they completed the daily practice. It was emphasized that self-reports should accurately reflect their practice so as to discourage dishonest reporting. The log was handed over to the experimenters upon completion of the interventions. Both apps (i.e. mindfulness and music) contained a function that tracked the timestamps during which time the participants used the app. This usage information was available to participants to keep track of their daily usage during the study. In addition, the time course containing each completed practice session for each participant was extracted from the apps upon completion of the study by the experimenters and was processed for further analysis. Specifically, the usage data generated from the app was cross-checked with the self-report logs for each participant. The physiological data was adjusted and time-locked with the onset timestamp provided in the apps. We included data in which participants completed >80% of a practice session. The mean practice data is reported in the Results section .
The mindfulness intervention consisted of a 10-day app-based program provided by Headspace ( https://www.headspace.com/ ). Participants did not receive an introductory session to the mindfulness or music programs but were provided with written instructions related to installation of the training app and usage for the 10-day intervention. The content of the training was based on well-established concepts and practices within the mindfulness literature and entailed daily practice in guided mindfulness meditation, with instructions delivered through short animated videos and sound files in the app. The training program centered on mindfulness meditation, which included focusing on a selected object (i.e. the body or the breath), monitoring the activity of the mind, noticing mind-wandering, and developing a non-judgmental orientation toward one’ s experience (i.e., equanimity). The mindfulness group was instructed to follow an introductory course to mindfulness in the Headspace app with two levels, namely ‘Basics I-II’. The program entailed that participants completed ‘Basics I’ for the initial 5 days with a daily duration of 20 min, and the ‘Basics II’ program for the remaining 5 days with a daily duration of 30 min. The Headspace app has been applied in previous research demonstrating effects pertaining to stress-relief such as overt self-reported stress , self-reported well-being and self-reported mindfulness . By examining user data provided by the app developers on how much time each subject had spent meditating with the app, we could confirm that all participants showed acceptable adherence to the program (>80%). Participants were informed of this and consented to us gaining access to their user data before entering the study.
We employed an active-control condition (listening to music), which we also made available using an app-based platform to structurally match the active-control intervention on content not specific to mindfulness, while in addition controlling for nonspecific treatment effects such as placebo, social support, and demand characteristics . The music used in the study was instrumental music and there were in total 60 music compositions. The music was organized according to different playlists in the app, specifically ‘focus’, ‘binaural beats’, and ‘piano’. Each of the 3 playlists consisted of 20 tracks with a duration of between 2 to 4 min. Participants were instructed to freely select which playlists to listen to and they were free to listen to any or all 3 playlists during the study. The daily listening requirement was 20 min for the initial 5 days, and 30 min for the remaining 5 days for the music group to match and allow balanced comparison across the mindfulness intervention group. By examining user data provided by the app developers on how much time each subject had spent listening to the music available in the app, we could confirm that all participants showed acceptable adherence to the program (>80%). Participants were informed of this and consented to us gaining access to their user data before entering the study.
The non-intervention control group were asked to maintain their daily and nightly routines for the 10-day period between the pre and post 48-hour HRV measurement period and were explicitly asked not to perform mindfulness or listen to music during this period. Acute physiological data was not collected from the non-intervention group in that there was no uniform activity level (as opposed to the two active intervention groups) that this group was asked to perform.
Participants in both app-based intervention groups were instructed to follow the programs in full to receive the maximum benefit of the interventions and complete the daily training/listening requirements at any time during the day that fitted with their schedule from 8am– 6pm. Participants were provided with a log in which they were asked to fill in the time during the day when they completed the daily practice. It was emphasized that self-reports should accurately reflect their practice so as to discourage dishonest reporting. The log was handed over to the experimenters upon completion of the interventions. Both apps (i.e. mindfulness and music) contained a function that tracked the timestamps during which time the participants used the app. This usage information was available to participants to keep track of their daily usage during the study. In addition, the time course containing each completed practice session for each participant was extracted from the apps upon completion of the study by the experimenters and was processed for further analysis. Specifically, the usage data generated from the app was cross-checked with the self-report logs for each participant. The physiological data was adjusted and time-locked with the onset timestamp provided in the apps. We included data in which participants completed >80% of a practice session. The mean practice data is reported in the Results section .
3 questionnaires were employed pre and post the 10 days intervention using an app-based platform ( https://www.datacubed.com/ ). The pre-questionnaires were filled in by participants prior to initiating the 48-hour HRV pre-measurement, while the post-questionnaires were filled in after completion of the 48-hour HRV post-measurement. However due to an error in the app ( https://www.datacubed.com/ ), datapoints from 13 participants (3 in the mindfulness group; 4 in the music group; 6 in the control group) were not captured and were thus lost. Initially, all participants were asked to complete the PSS . The PSS is a 10-item scale designed to measure the perception of stress. Furthermore, all participants were asked to complete the MAAS . The MAAS is a 15-item scale designed to assess dispositional mindfulness. Finally, participants were asked to complete the D3SQI, which is a 34-item questionnaire. The gold standard for assessment of sleep quality is polysomnography , however the Pittsburgh Sleep Quality Index (PSQI) has been demonstrated to have cardiovascular prognostic value , and as the D3SQI has been constructed to parallel the PSQI, the D3SQI was thus chosen to be applied in the current study to assess sleep quality. The mean data from the participants psychological measures for the three intervention groups are reported in .
Physiological acquisition HR was recorded as beat-to-beat intervals with the Firstbeat Bodyguard II HRV monitor (Firstbeat Technologies Ltd., Jyväskylä, Finland) that have been previously applied in research and validated with standard physiological monitoring systems used in clinical and laboratory settings . Bodyguard 2 is a wearable lightweight monitor attached on the chest using two ECG electrodes (Ambu Ltd., Ballerup, Denmark) for measuring 24h HRV (RR-intervals) including respiratory measures. Physiological signal processing The HRV measurements conducted in this study were performed according to the guidelines of the Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology . HRV allows to quantify the change in the time intervals between consecutive heart beats and refer to an index of SNS activity and PNS activity at any given time . Quantification of HRV parameters can broadly be classified into time and frequency domain measures. The primary time-domain measure is RMSSD and reflects the beat-to-beat variance in heart rate (HR). RMSSD is typically used to estimate vagally mediated changes reflected in HRV . RMSSD is reported in milliseconds (ms). The primary frequency-domain measure is high frequency HRV (HF-HRV) component (0.15 to 0.40 Hz) which estimates inhibitory vagally induced PNS input and LF/HF ratios. In following these standardized procedures, we report both RMSSD, HF-HRV and LF/HF ratios in this study. Furthermore, to gain a comprehensive insight of the ANS adaptation to the mindfulness practice employed in the current study, we also report other measures in the temporal and frequency domain (see ). All raw physiological data was processed for time- and frequency-domain parameters using the Kubios analysis software (version 3.4). The recorded data was imported to Kubios to calculate R-R intervals and associated variability . Examination of the electrocardiogram data (ECG) ensured that the autonomic R-wave detection algorithm had been performed satisfactorily. Artifact removal for the HRV was performed manually using the artifact correction tool to detect R-R intervals provided by the Kubios software. When correction was applied, detected artifact beats were replaced using cubic spline interpolation. Spectrum analysis was computed using the Fast Fourier Transformation procedure provided by the Kubios software. Because of the skewed distribution the HRV variables were log transformed prior to exposing the data to statistical analysis. The HRV data was recorded continuously at the pre and post time-points for the 48-hour pre-measurement and 48-hour post-intervention measurement. The time course was broken up into 24-hour segments and calculated as daytime (16 hours) and nighttime (8 hours) means on a participant-by-participant basis. The data was segmented according to estimated sleep (8 hours) and wake hours (16 hours) across participants. Due to these extensive time courses, the HRV activity reported in this study is to be considered a combination of SNS activity and PNS activity at any given time .
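To make these definitions concrete, the following minimal Python sketch computes RMSSD and the LF and HF band powers from a list of R-R intervals. It illustrates the standard formulas only and is not the Kubios pipeline used in the study; the synthetic R-R series, the 4-Hz resampling rate, and the Welch segment length are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def rmssd(rr_ms):
    """RMSSD = sqrt(mean of squared successive R-R differences), in ms."""
    return np.sqrt(np.mean(np.diff(rr_ms) ** 2))

def band_power(f, pxx, lo, hi):
    """Integrate the power spectrum over [lo, hi) Hz (rectangle rule)."""
    mask = (f >= lo) & (f < hi)
    return pxx[mask].sum() * (f[1] - f[0])

# Synthetic R-R series (ms): ~0.8-s beats with a 0.25-Hz (HF-band)
# respiratory modulation plus noise. Real analyses used artifact-corrected
# Kubios exports, not this toy signal.
rng = np.random.default_rng(0)
t_beat = np.arange(600) * 0.8
rr = 800 + 50 * np.sin(2 * np.pi * 0.25 * t_beat) + rng.normal(0, 15, 600)

print(f"RMSSD = {rmssd(rr):.1f} ms; log-RMSSD = {np.log(rmssd(rr)):.2f}")

# Frequency domain: resample the tachogram evenly at 4 Hz (cubic spline),
# then estimate the spectrum with Welch's method.
t = np.cumsum(rr) / 1000.0                      # beat times in seconds
t_even = np.arange(t[0], t[-1], 0.25)
rr_even = interp1d(t, rr, kind="cubic")(t_even)
f, pxx = welch(rr_even, fs=4.0, nperseg=256)

lf = band_power(f, pxx, 0.04, 0.15)
hf = band_power(f, pxx, 0.15, 0.40)
print(f"LF = {lf:.0f} ms^2, HF = {hf:.0f} ms^2, LF/HF = {lf / hf:.2f}")
```

On real recordings, artifact correction (as performed here in Kubios) must precede these computations, since single ectopic beats can inflate RMSSD substantially.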
All data are presented as mean ± SD unless otherwise stated. The data from the chronic phase (i.e. pre-post) were analyzed separately from the acute data (i.e. each of the 10 daily mindfulness or music sessions). Assumptions of normal distribution and sphericity of data were checked accordingly. Greenhouse-Geisser correction to the degrees of freedom was applied when violations of sphericity were present. Mixed 2 × 3 ANOVAs were used to assess whether there were differences pre and post intervention in the groups' mean RMSSD, HF-HRV and LF/HF ratios during day or night and in their respiration rate during day or night. Significant interaction effects from the mixed ANOVA were followed up with t tests. For the acute data, a mixed 10 × 2 ANOVA was used to assess whether the two active interventions had an acute effect on the groups' RMSSD, HF-HRV, LF/HF ratios and respiration rate during the 10 intervention days. Significance was set at 0.05 (2-tailed) for all analyses. Pearson correlation analysis was conducted to investigate practice dose-response and change in the mindfulness and music groups' RMSSD from pre to post measurement. Pearson correlations (R) were considered small = 0.1, medium = .24 and large = .37, as suggested by Cohen . The effect sizes for the mixed-measures ANOVAs were calculated as partial eta squared ( η 2 p ), using the small = 0.02, medium = 0.13 and large = 0.26 interpretation for effect size . The effect sizes for the t tests were calculated as Cohen's d, using small = 0.2, moderate = 0.5 and large = 0.8, also as suggested by . All data analysis was conducted using the Statistical Package for the Social Sciences (SPSS, version 26).
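The sketch below illustrates, under stated assumptions, how such a mixed time-by-group ANOVA with follow-up paired t tests can be reproduced in Python (the study itself used SPSS). The data frame, effect size, and group labels are hypothetical, and the pingouin package is assumed to be available; its mixed_anova output reports partial eta squared in the 'np2' column.

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical long-format data: one log-RMSSD value per participant
# per time point (pre/post), with a between-subjects group factor.
rows = []
for pid in range(90):
    group = ["mindfulness", "music", "control"][pid % 3]
    base = rng.normal(3.6, 0.3)                      # log-RMSSD baseline
    gain = 0.25 if group == "mindfulness" else 0.0   # illustrative effect
    for time, val in [("pre", base), ("post", base + gain + rng.normal(0, 0.1))]:
        rows.append({"id": pid, "group": group, "time": time, "log_rmssd": val})
df = pd.DataFrame(rows)

# 2 (time) x 3 (group) mixed ANOVA; 'np2' is partial eta squared.
aov = pg.mixed_anova(data=df, dv="log_rmssd", within="time",
                     subject="id", between="group")
print(aov[["Source", "F", "p-unc", "np2"]])

# Follow-up paired t test within one group, as in the reported analyses.
g = df[df.group == "mindfulness"].pivot(index="id", columns="time",
                                        values="log_rmssd")
print(stats.ttest_rel(g["pre"], g["post"]))
```

The same pattern extends to the 10 × 2 acute analysis by replacing the two-level time factor with the 10 session days.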
Demographical and behavioral effects Descriptive statistics with means and standard deviations for the three groups are reported in . A one-way ANOVA was conducted to investigate possible differences between the groups' descriptive statistics. There was no significant age difference between the groups (F(2,87) = .022, p = .97). Likewise, there was no significant difference in the music and mindfulness groups' practice dose-response (paired t = -1.50, df = 51, p = .14). For the questionnaire data, at pre-intervention there were no significant differences between the three groups. A mixed ANOVA was used to inspect time (pre and post measurement) by condition (mindfulness, music and control) for the groups' scores on the MAAS, PSS and D3SQI questionnaires. For the MAAS there was a significant interaction between group and time (F(2,74) = 6.24; p = .003, η 2 p = .14). Follow-up paired t tests revealed that in the music ( p = .73) and control ( p = .96) groups there were no significant changes in MAAS score from pre to post measurement. However, in the mindfulness group there was a significant increase in MAAS score from pre to post measurement (paired t = -3.9, df = 26, p = .001), indicating that the group's subjective mindfulness level increased. For the PSS there was a significant interaction of time and condition (F(2,74) = 3.54; p = .034, η 2 p = .08). Follow-up paired t tests revealed that in both the music ( p = .30) and control ( p = .85) groups there were no significant changes in PSS score from pre to post measurement. However, in the mindfulness group there was a significant decrease in PSS score from pre to post measurement (paired t = 7.46, df = 26, p < 0.01), indicating significantly lower perceived stress for the mindfulness group. The questionnaire data on the D3SQI displayed no significant interaction between group and time ( p = .53). There was a significant effect of time (F(1,74) = 5.21; p = .025, η 2 p = .06). A follow-up paired t test showed that only the mindfulness group had a significantly higher score on the D3SQI from baseline to post measurement (paired t = -3.267, p = .003, df = 26); this was not the case for the music or the control group. Furthermore, there was a significant effect of group (F(2,74) = 3.90; p = .024, η 2 p = .09), with the mindfulness group showing a significantly higher score on the D3SQI at post measurement than the two other groups. Chronic cardiovascular effects To address H1 we computed the mean daytime RMSSD for the three groups ( , left panel). A mixed ANOVA was used to inspect time (pre and post) by condition (music group, mindfulness group and control group) for the groups' RMSSD, controlling for age and gender. There was a significant interaction of time and group condition (F(2,84) = 6.19; p = .003, η 2 p = .12). Follow-up paired t tests showed that in the mindfulness group there was a significantly higher mean daytime RMSSD from pre to post measurement (paired t = -4.41, df = 48, p < .001). There was no significant difference in the active control group ( p = .45), and the non-active control group exhibited a significantly lower mean daytime RMSSD from pre to post (paired t = 2.79, df = 57, p = .007). We also computed the HF-HRV and the LF/HF ratio during daytime for the three groups, controlling for age and gender. A mixed ANOVA did not reveal significant group-by-time differences for HF-HRV (F(2,84) = 1.23, p = .37) or LF/HF ratio (F(2,84) = 1.42, p = .24).
To address H4, the mean nighttime RMSSD for the three groups was calculated ( , right panel). A mixed ANOVA was employed to inspect time (pre and post) by condition (music group, mindfulness group and control group) for the groups' RMSSD during sleep, with age and gender as covariates. There was a significant interaction of time and group condition (F(2,84) = 18.46; p < 0.01, η 2 p = .30). Follow-up t tests showed that in the music and control groups there were no significant changes in RMSSD during sleep from pre to post; however, in the mindfulness group there was a significant increase in RMSSD during sleep from pre to post (paired t = -7.46, df = 48; p < 0.01). A mixed ANOVA of the HF-HRV and the LF/HF ratio during nighttime for the three groups, controlling for age and gender, did not reveal significant group-by-time differences for HF-HRV (F(2,84) = 1.47, p = .22) or LF/HF ratio (F(2,84) = 1.82, p = .18). Acute cardiovascular effects To investigate H2, namely the acute effect of music and mindfulness on heart rate variability, we investigated whether there was a difference between the groups' pre-intervention RMSSD and their RMSSD while practicing mindfulness ( , left panel) or listening to music ( , right panel). Specifically, for the purpose of addressing H2, we used the daytime RMSSD from the chronic pre-measurement phase, i.e. the participants' 48-hour HRV measurement prior to the intervention, together with the acute RMSSD from the 10 intervention sessions. Subsequently we computed a delta variable by subtracting the chronic daytime RMSSD from the participants' acute RMSSD. A mixed ANOVA controlling for age and gender showed no significant difference in the participants' acute RMSSD between the two interventions. However, the mindfulness intervention produced a significant mean change in RMSSD of 12.99 ms (95% CI [8.42, 17.57]) and the music group produced a significant mean change in RMSSD of 8.50 ms (95% CI [4.04, 12.97]), indicating an acute effect of both interventions. When looking at the HF-HRV and the LF/HF ratio during the acute phase for the two active intervention groups, we did not observe significant differences, controlling for age and gender, for either the LF/HF ratio (F(9,29) = 1.44, p = .17) or HF-HRV (F(9,29) = 1.32, p = .24). Furthermore, to address H3 we sought to investigate the effects on respiration rate in the two groups during the acute phase. The mean respiration rate for the mindfulness group during mindfulness practice across the 10 intervention days was 14.05 breaths/min (SD = .29), while the mean respiration rate for the music group whilst listening to music was 17.09 breaths/min (SD = .5). The groups' mean respiration rate from the chronic phase was calculated both pre and post intervention for the 48-hour period . The same procedure as the above-mentioned mixed ANOVA was used. That is, the participants' baseline respiration rate, i.e. the 48-hour pre-intervention measurement from the chronic phase, was subtracted from their acute respiration rate during either mindfulness practice or music-listening. There was a significant interaction effect of group (mindfulness vs. music) and time (10 intervention days) (F(9,29) = 3.52, p = .005, η 2 p = .52) when controlling for age and gender. Mindfulness practice produced a significant mean change in the participants' respiration rate (-3.5 breaths/min, 95% CI [-4.00, -2.68]); there was no such significant effect on participants' respiration rate in the music group .
A mixed ANOVA with age and gender as covariates showed no significant interaction for the three groups (F(2,84) = .26; p = .77), no significant main effect of group on respiration rate (F(2,84) = .06; p = .93) during the chronic phase, and no significant main effect of time on respiration rate (F(1,84) = .07; p = .78). Practice dose-response and chronic cardiovascular effects Finally, we sought to investigate the relationship between day- and nighttime RMSSD and dose-response for the music and mindfulness groups ( and ). A delta variable was computed to probe whether the difference in RMSSD correlated with minutes of either mindfulness practice or music-listening. The delta variable was calculated as post RMSSD (night or day) minus pre RMSSD (night or day). The Pearson correlation coefficient ( R ) for the mindfulness group's daytime RMSSD and dose-response was significant at R = .47; p = .001; two-tailed ( , left panel), and for the RMSSD during sleep and dose-response it was significant at R = .44; p = .002; two-tailed ( , right panel). The results suggest that the quantity of home practice had a significant impact on the change in RMSSD during day and night for the mindfulness group. For the music group there was no significant correlation between daytime RMSSD and home practice. However, the Pearson correlation coefficient ( R ) for the music group's RMSSD during sleep and home practice was significant at R = .36; p = .005; two-tailed (figure not shown).
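As a concrete illustration of this dose-response analysis, the short Python sketch below computes the delta variable and its Pearson correlation with practice minutes. The per-participant values are synthetic stand-ins, not study data, and are generated with a weak dependence on practice so that a correlation is visible.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-participant values for one group: total minutes of
# home practice over the 10 days, and the delta variable
# delta = post RMSSD - pre RMSSD (daytime, in ms).
practice_min = rng.uniform(50, 200, size=49)
delta_rmssd = 0.05 * practice_min + rng.normal(0, 4, size=49)

r, p = stats.pearsonr(practice_min, delta_rmssd)
print(f"R = {r:.2f}, p = {p:.4f} (two-tailed)")
```

The same computation applies to nighttime RMSSD by swapping in the sleep-period means.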
This study has examined the impact of mindfulness practice on chronic as well as acute HRV effects compared to an active-control group and a non-intervention control group. We tested the effects of mindfulness in a naturalistic as opposed to a lab-based setting by designing a study which engaged participants in long-term HRV recordings. The study tested four hypotheses: H1) Mindfulness practice would increase the HRV response in the chronic phase during daytime in the mindfulness group post-training compared to pre-training. H2) The HRV response would increase during the acute phase over the 10-day practice period in the mindfulness group. H3) Mindfulness practice would decrease respiration rate in the acute practice phase but not in the chronic phase. H4) Mindfulness practice would increase the HRV response during sleep in the mindfulness group post-training compared to pre-training. We found statistical support for H1, namely that HRV increased during the daytime in the mindfulness group. As predicted, the study found support for H2, whereby the mindfulness group, and surprisingly also the music group, showed elevated HRV responses during the daily guided training sessions. Furthermore, we found support for H3, as we showed that respiration rate was reduced in the mindfulness group during the acute phase but not in the chronic phase. In support of H4, we found that the mindfulness group displayed an elevated response in the HRV signal from pre- to post-intervention compared to the two other groups during sleep. In the following we will discuss the results arising from the main hypotheses, and in addition address the results obtained from the questionnaire data across the three groups as well as the results obtained from the training dose-response across the two active intervention groups. Mindfulness and respiration rate Recent evidence has found an association between formal mindfulness practice and decreased respiration rate . Decreased respiration rate during mindfulness has also been observed in long-term mindfulness practitioners . Decreased respiration rate thus seems to be a general trait encountered across the mindfulness spectrum, from novices to experienced practitioners. Hence, it is in line with previous studies that we found an attenuated respiration rate in the mindfulness group during the daily mindfulness sessions . We also found that the music group did not exhibit differences in respiration rate whilst listening to music, suggesting that reduced respiration rate is indeed specific to mindfulness practice. Finally, by tracking HRV both pre and post mindfulness practice, that is, in periods when participants were instructed not to perform mindfulness practice, we did not observe significant changes in respiration outside of formal mindfulness practice. Taken together, these observations suggest that attenuated respiration is specifically present during formal mindfulness practice sessions. This finding is in line with the concept of mindfulness, in which attention to breathing serves as a fundamental component of practice, and corroborates previous research demonstrating decreased respiration rate during mindfulness . In support of the abovementioned changes in respiration in the mindfulness group, we found that the acute HRV response over the course of the formal mindfulness practice sessions exhibited an increase relative to a non-mindfulness baseline .
This observation has also been reported in previous studies . However, as we found the HRV response to be elevated both in the daytime and in the nighttime in the chronic phase , it indicates that respiration is not solely responsible for the increases in HRV during mindfulness practice in the acute phase. That is, in the chronic phase there were no differences across the three groups (pre and post) in respiration , and yet the RMSSD was elevated in the mindfulness group in the chronic phases, which we discuss below. Mindfulness and trait-dependent effects Mindfulness practice thus appears to be driving the changes reflected in the HRV response . This physiologically mediated effect may be interpreted to reflect that the practice of mindfulness involves intentionally directing attention to one's experience in the present moment , and that this repeated practice carries trait-dependent effects in the practitioner in terms of being better able to center awareness on present-moment experiences also in periods when no formal mindfulness practice is taking place. In support hereof, research has shown that mindfulness practice entails increases in a variety of psychological factors such as working memory, self-control, emotion regulation and attention . Specifically, parasympathetic influences on HRV are related to elevated cognitive control in the context of cognitive tasks . Mindfulness practice entails frequently becoming distracted by repetitive lapses in attention and returning the attention to the present moment, by centering awareness on present-moment experiences . Presumably, over the course of training, this repetitive exercise of focusing attention on the present moment gradually improves sustained attention . The implication of this line of research in the context of the present study might be that increased cognitive capacity arising from mindfulness practice results in reduced susceptibility to stress during the daytime. Such an interpretation is in agreement with the observed results in the present study, where increases in the HRV response during the chronic phases (day- and nighttime) were specific to the mindfulness group and not the active-control group . However, future studies are needed to corroborate this interpretation. For example, future studies should inspect whether the elevated HRV response observed in the chronic post-intervention phase in the mindfulness group is significantly higher in long-term practitioners relative to novice practitioners, and whether the elevated HRV response in the chronic post-intervention phase translates into (correlates with) improved cognitive capacity. We did not observe differences in HF-HRV and LF/HF ratios across groups during the chronic and acute phases . These results are surprising in that previous mindfulness studies have reported differences in the frequency domain as a function of mindfulness practice . We speculate that, as RMSSD has been reported to be less affected by respiratory rate than frequency-domain measures , this might account for the significant differences in the temporal domain, but not in the frequency domain, in the present study. However, as the previous studies mentioned above did not report both temporal- and frequency-domain measures, more studies reporting the whole spectrum of HRV parameters are needed to provide a more comprehensive picture of ANS adaptation to mindfulness practice.
Mindfulness and self-reported stress versus HRV detection of stress We found evidence that self-reported stress, as measured on the PSS, decreased over the 10-day intervention only in the mindfulness group . This result is supported by previous findings showing reduced self-reported stress from mindfulness practice (e.g. ). Specifically, the Headspace app used in the current study has been applied in previous research demonstrating effects pertaining to stress-relief such as overt self-reported stress . Our results showing a reduction on the PSS post intervention are in line with a previous study that found reduced self-reported stress on the PSS . In addition, self-reported mindfulness using the MAAS has in previous research been shown to increase, in line with the current results. However, the Headspace app has not previously been applied to measure the covert physiological impact of stress. Importantly, as HRV has been shown to be an indicator of objective physiological stress , we hypothesized that online-based mindfulness practice would be reflected on the physiological level as a decreased stress response. We found that physiological stress in the daytime post-intervention was decreased (indexed as an increased HRV RMSSD response) only in the mindfulness group . This result indicates that participants in the mindfulness group experienced decreased objective physiological and self-perceived stress. Indeed, this result was further corroborated by a strong correlation between the time spent on daily mindfulness practice and the RMSSD (day and night) . Finally, self-reported mindfulness traits as measured on the MAAS were significantly elevated in the mindfulness group relative to the control groups, which is supported by previous findings . The present study demonstrates proof-of-concept of applying real-time measurement such as HRV, which provides a fine-tuned objective assessment of a person's state of mind and body at any given moment (even during sleep). The capability of visualizing the effects on HRV demonstrates not only that mindfulness practice exerts profound effects on the HRV response, but also how and when mindfulness exerts an impact on the underlying HRV. Mindfulness and sleep quality We found an elevated HRV response during sleep in the mindfulness group relative to the two other groups. This finding extends previous findings in important ways. Specifically, previous studies have demonstrated that mindfulness practice exerts positive effects on self-perceived sleep quality . However, no studies have, to our knowledge, shown that the HRV response is increased during sleep in the context of a brief 10-day mindfulness practice intervention. There were no effects on sleep quality as measured through HRV in the two other groups. Previous studies have associated poor sleep quality with elevated sympathetic activity and suppressed parasympathetic activity . We observed the opposite pattern in the current study, namely that mindfulness practice entailed an increased HRV and thus increased physiological indices of sleep quality. This finding was further corroborated by results from the self-report questionnaire (D3SQI) indexing sleep quality, as well as previous research , where the mindfulness group reported better sleep quality over the 10-day intervention. The mindfulness group reported significantly higher levels of sleep quality compared to the two control groups.
It is, however, interesting that although we did not find physiological evidence of increased sleep quality in the two control groups, both groups reported significantly increased sleep quality from pre to post on the D3SQI questionnaire. Further support for the increased sleep quality reported by the mindfulness group comes from the positive correlation between home practice and the HRV response during both day- and nighttime . In addition, we also found that music-listening (i.e. the active control group) exhibited a positive correlation (figure not shown) with the nighttime HRV response. As poor sleep is associated with increased risk of cardiovascular disease and with mood and anxiety symptomatology , it is important to investigate the salutary effects of both mindfulness and music-listening as interventions aiming to increase sleep quality. Another possibility is that mindfulness practice indeed affected attention, as shown in previous research , which may have reduced fatigue and thus improved sleep quality. Admittedly, this interpretation is speculative and future studies should be designed to address this possibility. Music and HRV In the music group there were no significant effects observed arising from music-listening when comparing the pre-intervention HRV response to the post-intervention HRV response during the chronic phase . There was, however, a significant effect on the group's acute RMSSD during the daily music sessions . This finding is of particular interest in that it suggests that music may elevate the physiological response, albeit to a lower degree than mindfulness. To our knowledge, it has not previously been shown that music in an ecological setting, i.e. whilst participants are engaged in music-listening in their home or at work over a 10-day period, can influence the HRV response. We have in our previous work demonstrated that music (specifically binaural beats) exerts positive influence over cognitive processes, albeit tested in a 'non-ecological' setting, i.e. in a lab-based context . Previous findings have reported mixed results of music's effect on HRV . However, studies have found acute effects of music on physiological activity, indicating that music's frequency can affect heart rate, with some studies showing that low-frequency music decreases sympathetic activity .
Limitations of the study include that although the non-intervention control group was requested to maintain their daily and nightly routines, we did not (as was the case in the two active intervention groups) track the non-intervention control group through daily practice cycles. Thus, we did not have probes of their activity level across the 10-day period to the same extent as in the two active intervention groups, including daily acute HRV measurements. As a potential implication, the HRV results may have been skewed to reflect an elevated activity level, which may have reduced this group's HRV response over the course of the 10-day period. However, we did track proxies for activity levels (specifically VO2 and step counts) during the chronic measurement phases, which showed no significant differences across groups . These results suggest that differences in activity levels may not necessarily account for the differences in the HRV results. Studies have shown that respiratory rate and tidal volume exert influence on heart rate . However, as we did not adjust for respiration, this could be a limitation of the study. While controlling or adjusting for respiratory influences, either statistically or through breathing exercises, makes sense on a theoretical level , it is not necessarily straightforward on a methodological level . Specifically, in the current study we did not account for the role of respiratory rate, which should be counted as a limitation in interpreting these results.
The overall goal of this study was to probe the distinction between acute and chronic cardiovascular changes in mindfulness practice. Another goal was to investigate cardiovascular effects of mindfulness in a naturalistic setting as opposed to a lab-based environment. The effects of mindfulness on cardiovascular changes were consistent with our expectations, in that the results showed pronounced effects on the HRV RMSSD response during daytime and during sleep in periods when no formal mindfulness practice was taking place. Furthermore, during the daily mindfulness sessions, HRV was elevated in the mindfulness group and the music group. These results demonstrate causal effects of mindfulness training and provide support for the argument that a brief 10-day online-based mindfulness intervention exerts a positive impact on both chronic and acute HRV. Finally, the work highlights the potential of applying HRV in naturalistic settings as a means of tracking stress regulation throughout the day.
S1 Table. Chronic HRV variables in the time and frequency domain. Data are summarized for the three groups as mean and standard deviation. (DOCX)
S2 Table. Acute HRV variables in the time and frequency domain. Data are summarized for the three groups as mean and standard deviation. (DOCX)
S3 Table. VO2 and step count during the chronic phase, to account for activity levels across groups. (DOCX)
S1 File. Lists the demographics, physiological and questionnaire data included in the analysis. (XLSX)
S2 File. Lists the physiological data included in the analysis of the acute phase. (XLSX)
Executive Summary of the Early-Onset Breast Cancer Evidence Review Conference | dced17a2-489d-4034-bed2-2c9079a5ee0b | 7253192 | Gynaecology[mh] | The American College of Obstetricians and Gynecologists convened an expert panel to identify the best evidence and practices from the literature, existing relevant society guidelines, and available validated specific or generalizable clinical tools. The panel was recruited from the Society for Academic Specialists in General Obstetrics and Gynecology to review and summarize the evidence. Panel members were required to have expertise in evidence review and synthesis. Subspecialty expertise in breast disease was also sought. Several of the panel members had completed subspecialty fellowship training in breast disease. The panel developed 10 separate research questions and used the PICO criteria (P=patient, problem, or population; I=intervention; C=comparison, control, or comparator; O=outcome[s]) to frame the literature review. These questions form the organizing basis for this executive summary. Experts in literature searches from the ACOG Resource Center searched the Cochrane Library, MEDLINE through Ovid, and PubMed for references not indexed through MEDLINE from January 2010 to January 2019. Literature was organized by level of evidence. Published guidelines were categorized separately from references. A primary reviewer was assigned to each topic to review titles and abstracts, then the entire manuscript when appropriate. Panel members expanded the search criteria when necessary, either increasing the timeframe or broadening the search to other populations, particularly when inadequate evidence was found on the 18–45 years age group. Reference lists from papers found in the search were also reviewed. Internet searches with standard search engines were performed to seek guidelines, recommendations, and tools that might not have been published in peer-reviewed publications. Relevant information was compiled into an evidence summary template. Completed templates were then reviewed by a secondary reviewer and the primary and secondary reviewer worked together on revisions in response to the secondary reviewer's comments. The American College of Obstetricians and Gynecologists convened the Early-Onset Breast Cancer Evidence Review Conference in Washington, DC, April 1–2, 2019, including the panel members and representatives from stakeholder professional and patient advocacy organizations (Table ). Panel members presented their reviews to the convened group, which discussed each section. Comments from the discussion were integrated into the review summary by the primary reviewer. The revised summaries were sent to a tertiary reviewer for final review, and final revisions were made by the primary reviewer. The final reviews (see Appendices 1–10) were used to develop the educational material.
Breast cancer is the most common form of cancer in women and represents the second leading cause of cancer death in women. National Cancer Institute data from 2012 to 2016 indicated that 1.9% of new breast cancer cases and 0.9% of cancer deaths occurred among women aged 20–34 years, and 8.4% of new breast cancer cases and 4.7% of breast cancer deaths occurred among women aged 35–44 years. Black women had the highest death rate, at 28.1 per 100,000 persons. Although 5-year relative survival rates were largely similar across age groups, women younger than age 45 years had among the lowest rates, second only to women aged 75 years and older. , See Table 1 in Appendix 1, available online at http://links.lww.com/AOG/B864 , for breast cancer incidence rates by age and race, and Table 2 in Appendix 1 ( http://links.lww.com/AOG/B864 ) for breast cancer mortality rates by age and race. Younger women tend to have more aggressive and biologically unfavorable tumor subtypes than older women and poorer survival in early-stage disease (stages I and II) when compared with women older than 40 years. In advanced stages, younger women have lower mortality, likely because of better overall general health. Although mortality trends have improved in all women, young black women continue to have higher mortality rates than other young women with breast cancer, irrespective of stage or hormone receptors. Annual hazard rates of death for young black women are improving more slowly than those for other races and ethnicities, suggesting less benefit from advances in treatment. The poorer prognosis in black women is thought to result from multiple factors, including more aggressive tumors, access barriers, and social determinants of health (see Appendix 1 [ http://links.lww.com/AOG/B864 ] for complete evidence summary).
Pathogenic variants in autosomal dominant cancer genes account for approximately 5–10% of all cases of breast cancer. The BRCA1 and BRCA2 genes are the most common, accounting for more than 50% of pathogenic variants associated with early-onset breast cancer. Women who carry pathogenic variants have an increased lifetime risk of breast and other cancers and are at higher risk of developing early-onset breast cancer. BRCA pathogenic variants occur more frequently in certain populations (Table ), most notably in persons of Ashkenazi Jewish descent. The prevalence of BRCA1 and BRCA2 pathogenic variants is 1 in 40 (2.5%) in Ashkenazi Jews, compared with the general population prevalence of 1 in 400–600. , In Ashkenazi Jews, three site-specific founder mutations have been identified (185delAG and 5382insC in BRCA1 and 6174delT in BRCA2 ), representing more than 90% of the BRCA mutations. In the United States, African American women have a lower incidence of breast cancer than Caucasian women, but higher breast cancer mortality rates. The higher mortality rate seems to be associated with two patterns: proportionally more African American women are diagnosed before 50 years of age (30–40% of all breast cancers in African American women) compared with Caucasian women (approximately 20% of all breast cancer in Caucasian women), and African American women have a twofold higher rate of breast cancers that lack expression of the estrogen receptor, progesterone receptor, and human epidermal growth factor receptor 2, known as triple-negative cancer. Triple-negative tumors are biologically more active, with higher recurrence and mortality rates compared with most other breast cancer phenotypes. , These differences do not appear to be due to higher carriage rates of single gene mutations such as BRCA1 and BRCA2 alone. Currently, population-based screening for BRCA genes in the absence of other risk factors is not broadly recommended, given their rarity and the uncertain benefit of large-scale testing. Because Ashkenazi Jews have a 10-fold increased risk of carrying a founder mutation in BRCA1 or BRCA2 , consensus guidelines recommend offering routine testing for the three specific mutations. , The National Comprehensive Cancer Network, ACOG, the U.S. Preventive Services Task Force, the American Society of Breast Surgeons, and the American College of Medical Genetics provide recommendations for risk assessment, referral to genetic counseling or offering of genetic testing based on risk identification, and management of men and women identified with a genetic predisposition for early-onset breast cancer (see Table 2 in Appendix 2, available online at http://links.lww.com/AOG/B865 ). Common factors considered in risk assessment include a personal history of breast, ovarian, tubal, pancreatic, prostate, and other cancers, together with either early age of onset of these cancers or other cancer-specific factors that increase the likelihood of carrying a pathogenic variant in a breast cancer gene (eg, triple-negative tumors); and a family history of breast, ovarian, tubal, pancreatic, prostate, and other cancers suggesting an autosomal dominant pattern of inheritance. In addition to BRCA1 and BRCA2 , other important but less common autosomal dominant genes are associated with early-onset breast cancer risk. Panel testing has emerged in the past few years to assess for possible gene alterations that have been implicated in early-onset breast cancer.
The specific panels are usually defined by the laboratory offering the testing. A woman identified with a pathogenic variant placing her at increased risk for early-onset breast cancer can undergo increased surveillance to detect breast cancer at earlier stages, risk-reduction surgery, or chemoprophylaxis. Depending on the gene, surveillance may start at an earlier age and include mammography, magnetic resonance imaging (MRI), or both. The natural history of early-onset breast cancer is fairly well understood for some genes (eg, BRCA), but the penetrance and age of onset are less completely understood for non- BRCA genes associated with breast cancer. Table provides an overview of common genes included in panel testing, along with recommendations for surveillance and risk reduction (see Appendix 2 [ http://links.lww.com/AOG/B865 ] for complete evidence summary).
Assessment of family history is essential when evaluating young women accessing primary care. Understanding a woman's family history of breast cancer can identify individuals at elevated risk for hereditary breast cancer or women who would benefit from increased breast cancer surveillance. The American College of Obstetricians and Gynecologists, the Society of Gynecologic Oncologists, the U.S. Preventive Services Task Force, the National Institute for Health and Care Excellence, and the National Comprehensive Cancer Network have published guidelines recommending assessment of family history and screening for patients at increased risk of breast cancer. The American College of Obstetricians and Gynecologists states that screening should include, at minimum, a personal cancer history and first- and second-degree relatives' cancer history that includes a description of the type of primary cancer, the age of onset, and the lineage of the family member. The National Comprehensive Cancer Network clinical guidelines recommend genetic assessment for all patients with first- and second-degree relatives diagnosed with breast cancer younger than age 50 years. The U.S. Preventive Services Task Force recommends screening of women who have family members with breast, ovarian, tubal, or peritoneal cancer using one of several screening tools designed to identify a family history that may be associated with an increased risk for potentially harmful mutations in breast cancer susceptibility genes (BRCA1 or BRCA2). Women with positive screening results should receive genetic counseling and, if indicated after counseling, BRCA testing. Genetic counselors can help determine which of the many available genetic testing panels are most appropriate and cost-effective.

Women with deleterious genetic mutations tend to present with breast cancer at an earlier age. However, some studies suggest that women with a positive family history and no known genetic mutation are also at increased risk of developing breast cancer, and that these cancers occur at an earlier age than in the general population without a known mutation. The Nurses' Health Study and a systematic review and meta-analysis by Pharoah et al identified consistent findings. In the Nurses' Health Study, women with a family member diagnosed with breast cancer before age 50 years had an increased risk for breast cancer compared with women of the same age who had family members diagnosed at older ages. Compared with women with no family history, those whose mother was diagnosed before age 50 years had an adjusted relative risk (RR) of 1.69 (95% CI 1.39–2.05), and those whose mother was diagnosed at 50 or older had an RR of 1.37 (95% CI 1.22–1.53). Pharoah et al found that a history of breast cancer in at least one first-degree relative resulted in RR estimates ranging from 1.2 to 8.8, with most studies showing RRs between 2 and 3. The pooled risk estimate for having two affected first-degree relatives was 3.6 (95% CI 2.5–5.0). Genetic mutations were not factored out in many of the older studies. There are limited data on outcomes for women with an elevated risk of breast cancer by family history without an established familial genetic mutation. National guidelines consistently emphasize the importance of gathering a thorough family history of breast cancer. However, these guidelines are based on limited data estimating lifetime and age-based breast cancer risk for women in families without identified genetic mutation carriers.
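To put these relative risks in rough absolute terms, assume an average lifetime breast cancer risk of about 12% (this baseline is my assumption for illustration, not a figure from the cited studies, and simple multiplication ignores age structure and competing risks):

$$0.12 \times 1.69 \approx 0.20, \qquad 0.12 \times 3.6 \approx 0.43.$$

On this crude arithmetic, one first-degree relative diagnosed before age 50 years brings an average-risk woman near the 20% lifetime-risk threshold discussed in the next section, and two affected first-degree relatives take her well beyond it.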
Many of the current guidelines are based on expert opinion and on studies of family history published before the availability of genetic testing for mutations such as BRCA1 and BRCA2. There is general consensus that women with a lifetime risk of breast cancer greater than 20%, as determined by any model, are at high risk. Multiple validated models can be used to determine the probability of a genetic mutation that increases the risk of breast cancer; there is no consensus, and there are no data, to support recommending one model over another. Currently, the National Comprehensive Cancer Network recommends that women with an estimated lifetime risk of breast cancer of 20% or higher, determined by models largely based on family history (eg, the Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm, Claus, BRCAPRO, or Tyrer-Cuzick), should be offered annual mammography screening starting at age 30 years and annual breast screening by MRI starting at age 25 years (see Appendix 3, available online at http://links.lww.com/AOG/B866 , for complete evidence summary). This is in contrast to screening recommendations for average-risk women, which all recommend screening with mammography alone, starting at age 40–50 years, depending on the source.
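The models named above require detailed pedigrees and validated inputs, but their basic mechanic, combining a baseline lifetime risk with multiplicative risk factors and flagging the 20% threshold, can be sketched in a few lines. The following is a deliberately simplified toy illustration with placeholder relative-risk values; it is not the Gail, Claus, BRCAPRO, Tyrer-Cuzick, or BOADICEA algorithm and should not be used clinically.

```python
# Toy lifetime-risk screen: multiply a baseline lifetime risk by the
# relative risk of each factor present, then flag the >=20% threshold
# that triggers consideration of supplemental MRI screening.
# All relative-risk values here are placeholders for illustration.

BASELINE_LIFETIME_RISK = 0.12  # assumed average-risk baseline

RELATIVE_RISKS = {
    "first_degree_relative_dx_before_50": 1.69,
    "extremely_dense_breasts": 2.0,
    "atypical_hyperplasia": 4.0,
}

def estimated_lifetime_risk(factors):
    """Multiply the baseline by the relative risk of each factor."""
    risk = BASELINE_LIFETIME_RISK
    for factor in factors:
        risk *= RELATIVE_RISKS[factor]
    return min(risk, 1.0)  # a probability cannot exceed 100%

def is_high_risk(factors, threshold=0.20):
    return estimated_lifetime_risk(factors) >= threshold

profile = ["first_degree_relative_dx_before_50"]
risk = estimated_lifetime_risk(profile)
print(f"Estimated lifetime risk: {risk:.0%}; high risk: {is_high_risk(profile)}")
```

Validated models differ from this sketch in exactly the ways that matter clinically: they use age-specific incidence, handle correlated risk factors, and account for competing mortality, which is why the guidelines cite named, validated instruments rather than naive multiplication.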
There are at least moderate-quality data that risk assessment, referral for genetic counseling, and genetic testing provide net benefit in women at high risk for early-onset breast cancer. These steps can form the basis for intensive surveillance for early detection or for the use of risk-reduction methods that have proven effective in detecting breast cancer at an earlier stage and decreasing mortality rates. The National Institutes of Health maintains a periodically updated list of online resources designed to educate and assist health care providers on topics ranging from basic genetics and risk assessment to criteria for referral to genetic counseling and interpretation of genetic test results. Other national societies have created genetics "toolkits" or published guidance to educate health care providers on basic cancer genetics, risk assessment, and referral recommendations (see Table 1 in Appendix 4, available online at http://links.lww.com/AOG/B867 , for a list of useful websites). Providers can also learn about these topics through other mechanisms, such as continuing medical education and online learning. The depth and detail of the material covered range from superficial (eg, short "expert" videos) to online courses that take place over several months. Very few online courses provide a validated assessment of competency or certification, and the content of specific training and assessment of competency for physicians who counsel patients about genetic testing have not been standardized.

The U.S. Preventive Services Task Force concluded that health care providers should assess risk based on personal or family history and refer women who screen positive to cancer genetic counselors. A number of validated tools exist to determine who should be referred for genetic testing, and several professional specialty societies have developed lists of indications for referral and testing. These tools are specifically designed to evaluate who should be referred for BRCA testing; however, because BRCA carriers represent the greatest proportion of women at genetic risk for early-onset breast cancer, the tools are reasonable proxies for genetic screening for early-onset breast cancer. They have been validated in some populations (non-Hispanic white women), but it is not known how they perform in nonwhite populations. It also remains unclear how frequently these tools are used in practice by physicians, and evaluation suggests that they miss a substantial proportion of carriers. Interpretation of genetic test results can be complex and usually requires a qualified individual with specific training in cancer genetics. A number of tools and calculators are used to estimate lifetime invasive breast cancer risk, but not necessarily the predicted age of onset (see Table 2 in Appendix 4, http://links.lww.com/AOG/B867 , for a comparison of four commonly used risk-assessment models: Tyrer-Cuzick, the Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm, Claus, and the modified Gail model, also called the Breast Cancer Risk Assessment Tool). Numerous national consensus guidelines and recommendations have been developed to assist health care providers in communicating with patients about referral to genetic counseling, testing for early-onset breast cancer genes, or both.
Some specialty societies have produced separate guidance specifically addressing both the interpretation of genetic test results and how to communicate those results to patients. Some guidelines are frequently updated, whereas others are revised only periodically (ie, every few years), resulting in guidance that may differ and cause confusion among health care providers and patients. All current guidelines recommend that women be screened for personal and family history of breast and other related cancers and referred for genetic counseling, testing, or both as appropriate. In addition, all guidelines recommend that the determination for testing and the pretest and posttest counseling be performed by individuals with appropriate training. However, there is a shortage of genetic counselors in the United States, which has been identified as a barrier to effective counseling (see Appendix 4 [ http://links.lww.com/AOG/B867 ] for complete evidence summary).
Breast tissue is composed of fibroglandular tissue and fat. The fibroglandular tissue is a mixture of fibrous stroma and ductal epithelium and appears denser, or brighter, on mammography because X-rays do not penetrate it at the same rate as fatty tissue. The Breast Imaging-Reporting and Data System for mammography developed by the American College of Radiology includes a subjective assessment of how much fibroglandular tissue is present (see Table 1 in Appendix 5, available online at http://links.lww.com/AOG/B868 ). As women age, breast tissue typically becomes less dense.

Most of the data about breast density and cancer risk come from women older than age 50 years, yet dense breasts are present in the majority of younger women. A systematic review of risk for breast cancer in women aged 40–49 years reported that extremely dense breasts were associated with an increased risk of breast cancer when compared with breasts with scattered fibroglandular densities (RR 2.04, 95% CI 1.84–2.26). In a more recent case-control study of 213 Korean women with breast cancer, women who had the highest breast density, described as 50% density or higher, had an adjusted odds ratio of 2.98 (95% CI 0.99–9.03) for breast cancer after adjustment for multiple variables. The wide CIs in this nonsignificant finding are likely related to the small number of included women, and future studies should be monitored. Median age in the study was 51.5 years, with 45% of cancers diagnosed before age 50 years. Older studies are harder to interpret because they used many different ways of characterizing breast density, but in general, when comparing the most dense with the least dense group, there appears to be an increased risk of breast cancer, with RRs as high as 4.64 (95% CI 3.64–5.91) reported. Because the majority of premenopausal women have dense breasts, it is not clear that RRs estimated from comparisons of extremes of breast density categories are appropriate measures of risk in this age group.

Dense breasts decrease the sensitivity of mammography because dense breast tissue appears radiopaque, similar to breast cancers, decreasing visual contrast ("masking"). In women with extremely dense breasts, mammography has 62% sensitivity for detection of breast cancer, compared with 88% sensitivity for women with fatty breasts. One way to assess delay in diagnosis is to determine the rate of interval cancers, those cancers found between recommended screening intervals after a normal mammogram. No studies evaluating masking due to breast density have exclusively evaluated women with early-onset breast cancer; most included large proportions of women older than age 50 years, though women aged 40–49 years were included. More recent evidence suggests that dense breasts are associated with at least a twofold increased risk of interval cancers as well as a worse prognosis, including larger tumor size and more node-positive disease. Studies of adjunctive screening of women with dense breasts with ultrasonography and MRI generally noted higher cancer detection rates and earlier diagnoses, but also showed an increase in biopsies for benign lesions and increased health care costs, and no study showed improvement in mortality (see Appendix 5 [ http://links.lww.com/AOG/B868 ] for complete evidence summary). The majority of women under age 46 years have dense breasts, so any recommendation for additional screening in this age group would require additional testing in a large number of women whose baseline risk is low.
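The sensitivity gap quoted above can be turned into a concrete estimate of masking (simple arithmetic, assuming the quoted sensitivities apply uniformly): per 1,000 women with breast cancer undergoing screening mammography,

$$0.88 \times 1000 = 880 \text{ cancers detected (fatty breasts)}, \qquad 0.62 \times 1000 = 620 \text{ (extremely dense)},$$

leaving roughly 260 additional cancers per 1,000 initially undetected in extremely dense breasts and available to present later as interval cancers.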
Most organizations, including ACOG and the U.S. Preventive Services Task Force, do not recommend additional screening in women younger than age 46 years with a normal mammogram and dense breasts. The Society of Breast Imaging expresses concern for a delay in diagnosis and later stage at diagnosis of noncalcified breast cancers because of dense breast tissue and suggests that ultrasonography may be of benefit, provided the woman is willing to accept an increased risk of false-positive results. The National Comprehensive Cancer Network recommends that women with mammographically dense breast tissue (heterogeneously or extremely dense tissue) be counseled about the risks and benefits of supplemental screening. Neither of these organizations specifically addresses dense breasts in younger women. Mandatory breast density reporting has been enacted as legislation in an increasing number of states. Many patients receive letters notifying them of their breast density, and interpretation of these letters can be challenging for patients and health care providers. In early 2019, Congress authorized the U.S. Food and Drug Administration to amend the Mammography Quality Standards Act of 1992 to include mandatory breast density reporting at the federal level. The public comment period for the proposed changes to the legislation ended in June 2019, and final regulations should be forthcoming. The American College of Obstetricians and Gynecologists recommends that health care providers comply with state laws that require disclosure of breast density in mammogram reports. Younger women with dense breasts and no other risk factors can be counseled that dense breasts are very common in this age group and that supplemental screening methods are available but not specifically recommended, carry a significant risk of false-positive results, and have not been shown to change outcomes. When mammographic density in combination with other risk factors places a woman at above-average risk, additional screening with ultrasonography may be warranted, and a shared decision-making model can be applied. Some breast cancer risk calculators integrate breast density and can be used to assess overall risk in these women (see Appendix 5 [ http://links.lww.com/AOG/B868 ] for complete evidence summary).
History of Proliferative Breast Disease

Many proliferative breast diseases increase the risk of breast cancer, but the effect on early-onset breast cancer risk is unknown. Atypical ductal hyperplasia carries a more than 20% risk of ductal carcinoma in situ (DCIS) or invasive malignancy at the time of diagnosis, so it is typically excised. Both atypical ductal hyperplasia and atypical lobular hyperplasia are associated with a fourfold increased lifetime risk of breast cancer. When atypical lobular hyperplasia is an incidental finding and there is concordance between radiologic and pathologic findings regarding the targeted biopsied lesion, it is less likely to be associated with a concurrent malignancy, so close monitoring is usually appropriate. Lobular carcinoma in situ is not considered a preinvasive malignancy like DCIS, but it does significantly increase the lifetime risk of breast cancer (RR 6.9–11, absolute risk 7.1% over 10 years). Pleomorphic lobular carcinoma in situ may increase that risk even further. Radial scars are characterized microscopically by a fibroelastic core with radiating ducts and lobules. Radial scars and complex sclerosing lesions carry an 8–15% risk of DCIS or invasive malignancy at the time of excision, and radial scars are usually managed by excisional biopsy. There are limited data with which to determine the optimal screening strategy after atypical ductal hyperplasia, atypical lobular hyperplasia, or lobular carcinoma in situ. Breast MRI may improve breast cancer detection over mammography alone, but it is associated with more biopsies in this population. The National Comprehensive Cancer Network is the only professional society with screening recommendations for those who have had atypical ductal hyperplasia, atypical lobular hyperplasia, or lobular carcinoma in situ:
Annual mammography (not before age 30 years). Consider tomosynthesis.
Consider annual breast MRI (not before age 25 years).
Clinical breast examinations every 6–12 months.
Engage in breast self-awareness (women should be familiar with their breasts and report changes to their health care provider promptly).

Past or Present Use of Hormonal Contraception

There have been conflicting data regarding the effect of hormonal contraception on breast cancer risk. A large meta-analysis in 1996 revealed a small increased risk of breast cancer among women with current or recent oral contraceptive use (RR 1.07, SD 0.02, P<.001). Similar findings were noted in a large cohort study in 2017 (RR 1.20, 95% CI 1.14–1.26). The absolute risk was quite small: one additional breast cancer diagnosis for every 7,690 women using hormonal contraception each year (a worked conversion of these figures appears after this section). In both studies, breast cancer risk returned to baseline 5–10 years after discontinuing hormonal contraception. Most studies do not suggest an increased risk of breast cancer among women using a levonorgestrel intrauterine system (IUS) or depot medroxyprogesterone injections. There are limited data regarding the etonogestrel implant, but no study to date has demonstrated an increased breast cancer risk. The risks of hormonal contraception must be weighed against the health, social, and economic consequences of unplanned pregnancy, as well as the many noncontraceptive benefits of hormonal contraception. The maternal mortality rate in the United States in 2015 was 26.4 deaths per 100,000 pregnancies, which is comparable with the rate of excess breast cancer diagnoses (13 [95% CI 10–16]/100,000 person-years) related to hormonal contraception suggested by the 2017 cohort study. Hormonal contraception, particularly oral contraceptives, significantly decreases the risk of ovarian and endometrial cancers. There are no screening guidelines that specifically address exposure to hormonal contraception, so routine breast cancer screening is recommended in the absence of other risk factors for early-onset breast cancer.

Past or Present Use of Fertility Treatments

Many fertility treatments cause an increase in circulating estrogen and progesterone levels, which theoretically could increase future breast cancer risk. Most studies have demonstrated no change or a decreased risk of breast cancer after fertility treatments. Few studies specifically evaluated the risk of early-onset breast cancer. Very limited data suggest an increased risk of breast cancer among specific populations, including women exposed to many high-dose cycles of clomiphene citrate and women undergoing in vitro fertilization before age 24 years. The American Society for Reproductive Medicine states that there is "fair evidence that fertility drugs are not associated with an increased risk of breast cancer (Grade B)." No screening guidelines specifically address fertility treatment exposure, so routine breast cancer screening is recommended in the absence of other risk factors for early-onset breast cancer.

History of Radiation Exposure

Chest radiation therapy before age 30 years is a well-established risk factor for early-onset breast cancer. Treatments of concern include mantle radiation for Hodgkin's lymphoma and moderate-dose chest radiation therapy for non-Hodgkin's lymphoma, leukemia, bone malignancies, or pediatric solid tumors (eg, Wilms tumor, neuroblastoma, and soft-tissue sarcoma). The cumulative incidence of invasive breast cancer in these patients is 13–20% by age 40–45 years, similar to that seen among BRCA1 or BRCA2 mutation carriers. Risk is greatest among women treated with 40 Gy or more, but all women treated with 20 Gy or more are at increased risk for early-onset breast cancer. This increased risk is evident 8–10 years after completion of radiation therapy and does not plateau at any point after treatment. Early initiation of breast cancer screening is effective for reducing stage at diagnosis in this population. Both mammography and breast MRI are effective screening studies after chest radiation therapy, but mammography has higher specificity. Multiple professional organizations have published screening guidelines for women with a history of chest radiation therapy (Table ). There are limited data to suggest superiority of one screening protocol over others. Shared decision making, including discussion of the risks of false positives and false negatives, is recommended when deciding on a screening strategy.

Prior Breast or Ovarian Cancer

Breast cancer survivors remain at risk for a second breast cancer, but the risk for a second early-onset breast cancer among young breast cancer survivors is unknown. Among survivors of any age without a known cancer gene mutation, the risk of a second breast cancer is approximately 3% and 7% at 10 and 15 years after diagnosis, respectively. There are no data regarding risk of early-onset breast cancer in women with ovarian cancer in childhood, adolescence, or early adulthood. After breast cancer treatment, survivors require clinical and imaging follow-up to assess for recurrence and second malignancies. Both the National Comprehensive Cancer Network and the European Society of Breast Cancer Specialists recommend annual mammograms starting 6–12 months after completion of treatment. Breast MRI should be considered in patients at high risk for a second cancer (eg, BRCA1 or BRCA2 mutation carriers) (see Appendix 6 [ http://links.lww.com/AOG/B869 ] for complete evidence summary).
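As an arithmetic check on the hormonal-contraception figures quoted above (my reconstruction from the cited numbers, not an additional study result): an excess of 13 diagnoses per 100,000 person-years corresponds to

$$\text{number needed to harm} = \frac{100{,}000}{13} \approx 7{,}692 \approx 1 \text{ in } 7{,}690 \text{ women per year},$$

and taking the RR of 1.20 at face value implies a baseline incidence in the study population of about $13/(1.20-1) = 65$ per 100,000 person-years.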
Objective measures of health disparities are well established, and health disparity populations exhibit differences in rates of mammography screening, age at breast cancer diagnosis, stage at time of diagnosis, and rates of cancer treatment. African American women are significantly more likely to die of breast cancer than white women (Fig. ). Other health disparity groups, such as American Indians and Alaska Natives, Asians, Hispanics, and Native Hawaiians and other Pacific Islanders, are affected but often inadequately studied, as are sexual and gender minority persons. The increased incidence of more-aggressive tumor types only partly explains the survival gap for black women. Social determinants of health, such as systemic racism, poverty, and the environment, greatly affect cancer screening rates and outcomes. Health literacy, childcare concerns, financial difficulties, and transportation affect the likelihood of receiving preventive health services such as mammography. Geography is a particularly important factor: rural women are more likely to live in poor counties, with greater barriers to accessing primary care, and poverty or lack of a regular primary care provider who recommends mammography is highly predictive of not being screened. In general, poverty status correlates with more advanced stage at diagnosis, receipt of less aggressive treatment, and higher risk of all-cause mortality. Physical proximity to urban centers is not a panacea: in 2014, African American women with breast cancer in Georgia living in isolated rural areas were 45% more likely to die than white women, whereas African American women living in urban areas were 24% more likely to die than white women.

Provider-level bias and discrimination in breast cancer care exist. For example, when genetic testing is indicated, African American women are less likely than white women to be referred for genetic testing for pathogenic variants. African American women are also less likely to receive any type of lymph node surgery for axillary staging overall. Women of lower socioeconomic status are adversely affected by lack of health insurance coverage: cost affects primary care utilization and is a factor in patient decision making regarding mammography. By one estimate, up to 37% of the mortality difference in breast cancer between black and white women can be attributed to disparities in health insurance.

Intensive focus on modifiable system factors would be beneficial, such as expanding insurance coverage, addressing transportation barriers to appointments, and increasing access to primary care. The use of patient navigators and advocates, translator services, and tracking systems across different health systems could reduce the effect of limited health literacy, mistrust, and negative prior experiences with health care. General practitioners who provide counseling and recommendations on preventive health care services can improve rates of mammography for underscreened groups, such as recent immigrants. Bias by health care providers and health systems leading to disparate rates of services offered to patients should be corrected, and, to further decrease differences in mortality, emphasis should be placed on ensuring equal treatment after diagnosis. Groups such as the Black Women's Health Imperative are at the forefront of working to reduce these disparities and can serve as a resource for both patients and health care providers.
Efforts to promote quality improvement and adherence to national guidelines are important. Among younger women, breast cancer incidence is higher in African American women and some other minority groups than in white women; in contrast, among postmenopausal women, breast cancer incidence is highest in white women. The proportion of breast cancer diagnoses by age for nonwhite patients with breast cancer peaks in the late 40s, whereas diagnosis for white patients peaks in their 60s; this phenomenon is known as the crossover effect (Fig. ). Most breast cancer research has been conducted in white women, and major professional society screening guidelines developed using this body of evidence might not be adequate for nonwhite populations. No national guidelines address this concern, but in 2018 the American College of Radiology commented that women at high risk, particularly black women and those of Ashkenazi Jewish descent, should be evaluated early in life to discuss potential benefit from supplemental screening. Consideration should be given to encouraging screening before age 50 years, especially for African American women (see Appendix 7, available online at http://links.lww.com/AOG/B870 , for complete evidence summary).
Although there are no validated tools or best practices specific to identifying risk factors or estimating the risk of early-onset breast cancer, multiple tools may be helpful in identifying short-term risk in younger women. Current best practices aim to identify women at risk of familial cancer syndromes on the basis of family history, to determine who may benefit from genetic testing. The three most widely used tools for predicting BRCA gene carrier probability are BRCAPRO, BOADICEA (the Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm), and Penn II. BRCAPRO and BOADICEA also provide cancer risk estimates in addition to estimates of the likelihood of genetic mutations. These models might be useful for directing women at increased risk of genetic mutations that pose a high risk of early-onset disease to genetic testing and counseling. BRCAPRO is a validated statistical program that estimates individual carrier probabilities on the basis of family history; it is not specific to any age range and does not directly estimate the risk of early-onset cancer, but rather the risk of carrying a BRCA1 or BRCA2 mutation. BOADICEA likewise was developed using population data from families in the United Kingdom to create a model based on family history, and it requires a detailed family pedigree. The Penn II model uses clinical questions based on family history to reach a carrier probability but does not calculate cancer risks. Once a BRCA1 or BRCA2 mutation is identified, the Stanford risk-assessment tool for BRCA carriers may aid in decision making about preventive measures, as it provides age-related cancer risks and compares multiple intervention strategies.

Additional widely validated models to assess cancer risk include the Tyrer-Cuzick, modified Gail, and Breast Cancer Surveillance Consortium models. None specifically assesses the risk of early-onset or premenopausal breast cancer, although most provide estimated 5- or 10-year cancer risk as well as lifetime risk of breast cancer. No model used validation cohorts with patients younger than 20 years. The modified Gail model has been validated in women 35 years and older to assess 5-year invasive cancer risk. The Tyrer-Cuzick model has been studied in women older than age 20 years to assess 10-year cancer risk and has been shown to perform better in women with a family history of breast cancer. The Breast Cancer Surveillance Consortium risk calculator is validated for women older than age 35 years to provide 5- and 10-year risks and includes family history factors as well as breast density in the calculation. There are limited data on the use of these models to specifically address cancer risk reduction in young women.

Family history should be collected and updated periodically to identify patients who may be at increased risk of predisposing genetic mutations. Tools that may aid in collecting family history include the Ontario Family History Assessment Tool, Manchester Scoring System, Referral Screening Tool, Pedigree Assessment Tool, and FHS-7. There is no evidence to recommend one method over another. Those who screen positive or who meet published guidelines for qualifying family histories should be referred for genetic counseling and testing. There are no guidelines or best practices for identifying risk factors or for the use of tools to estimate risk specific to early-onset breast cancer; however, multiple organizations provide guidance for assessing risk of breast cancer in general. The U.S.
Preventive Services Task Force advocates use of brief familial assessment tools to assess women with a personal or family history of breast, ovarian, tubal, or peritoneal cancer or who have an ancestry associated with BRCA1 or BRCA2 gene mutations. The U.S. Preventive Services Task Force reviewed six tools that were adequately validated but found insufficient evidence to recommend one tool over another. Other organizations likewise do not advocate the use of any specific tool. National Comprehensive Cancer Network guidelines on breast cancer risk reduction recommend assessing family history and referring for genetic counseling when appropriate, as well as use of the modified Gail or Tyrer-Cuzick model to assess risk among women older than age 34 years. The National Comprehensive Cancer Network has also established criteria for genetic testing for high-risk mutations; these guidelines recommend assessment, based on family history, no earlier than age 18 years. No specific tool is recommended, and the recommendations are not specific to reducing the risk of early-onset cancer (see Appendix 8, available online at http://links.lww.com/AOG/B871 , for complete evidence summary).
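Carrier-probability tools such as BRCAPRO formalize a Bayesian calculation: a population prior is updated by how much more likely the observed family history would be if the woman were a carrier than if she were not. The sketch below shows only that core update, with an invented likelihood ratio; it is not BRCAPRO, BOADICEA, Penn II, or any validated instrument.

```python
def posterior_carrier_probability(prior, likelihood_ratio):
    """Bayes by odds: prior odds times the likelihood ratio of the
    observed family history, converted back to a probability."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Priors cited earlier in this document: ~1 in 400 in the general
# population, ~1 in 40 for women of Ashkenazi Jewish descent.
GENERAL_PRIOR = 1 / 400
ASHKENAZI_PRIOR = 1 / 40

# Invented likelihood ratio: suppose the pedigree is 20 times more
# likely under the carrier hypothesis than the noncarrier hypothesis.
LR_PEDIGREE = 20.0

print(f"General population: {posterior_carrier_probability(GENERAL_PRIOR, LR_PEDIGREE):.1%}")
print(f"Ashkenazi descent:  {posterior_carrier_probability(ASHKENAZI_PRIOR, LR_PEDIGREE):.1%}")
```

The same hypothetical family history moves a woman of Ashkenazi descent from a 2.5% prior to roughly a one-in-three posterior, versus about 5% from a general-population prior, which is one reason ancestry is an explicit input to the referral tools.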
Shared decision making is a key component of patient-centered health care, particularly because there is often more than one option for screening. Although patient decision aids and risk calculators help enumerate risk and are adjuncts to shared decision making, the process is more involved. Using narrative risk communication strategies, communicating absolute rather than relative risk, and managing framing bias are important considerations in communicating risk of early-onset breast cancer. Many decision aids and calculators are directed to specific populations (eg, subtypes or age ranges), but none is specific for communicating risk of early-onset breast cancer. Several tools may be useful:
Families Sharing Health Assessment and Risk Evaluation (Families SHARE, a product of the National Institutes of Health's National Human Genome Research Institute) is a decision aid that is useful for shared decision making for individuals of varied age groups and can be used within and outside of an office setting.
Breast Screening Decisions (developed collaboratively by Weill Cornell Medical College and Memorial Sloan Kettering Cancer Center) is directed to women aged 40–49 years.
Breast Cancer Screening (PDQ) offers both a patient version and a health care provider version, which can be used as companion documents.
The University of Wisconsin School of Public Health's Health Decision tool, originally created and tested at the University of California, San Francisco, includes a breast cancer screening module that can be integrated into some electronic health record systems.
Studies of decision aids for breast cancer prevention in BRCA1 and BRCA2 mutation carriers demonstrated that cancer-related distress was reduced among those who used a decision aid compared with those who did not; decisional conflict did not change with use of the aid. The following tools may be useful for women at high risk of hereditary breast and ovarian cancer:
The Cancer Risk Education Intervention Tool is a web-based (noninteractive) adjunctive tool for use in low socioeconomic settings and among ethnically diverse women.
The Stanford Shared Decision Making Tool for women with BRCA1 or BRCA2 was developed to guide decision making about screening and treatment based on calculated risk.
For minority groups, the Health Belief Model was used as a construct for developing a school-based classroom and online tool that increased knowledge about breast cancer risk among African American women aged 20–39 years.
Because we anticipated that a literature search would find limited information specific to communicating risk of early-onset breast cancer, we deliberately conducted a broad search encompassing other aspects of breast cancer and other cancers and health conditions. Patient decision aids for colorectal cancer screening have been shown to improve knowledge and interest in screening compared with no information, but are no better than general colorectal cancer screening information. Healthwise Knowledge Base is an evidence-based interactive platform to inform patients about mammogram initiation that includes a shared decision making breast cancer screening tool for women aged 40–50 years (see Appendix 9, available online at http://links.lww.com/AOG/B872 ), as well as a tool for assisting in decisions about BRCA testing. The user's concerns, desires, and fears are weighed in response to evidence provided about the risks and benefits of screening, and a score indicating preferences and readiness for screening is calculated.
A decision analytic model was used to improve estimation of benefits and risks for patients undergoing thrombolysis, with the added benefit that this computerized decision aid can be embedded in an electronic health record. This approach could be translated to support integration of the Gail or Families SHARE model, for example, into a primary care or a woman's personal electronic health record. There are no current major professional society or health services guidelines about communicating the risk for early-onset breast cancer. Shared decision making has been endorsed by ACOG for deciding the age at which to initiate breast cancer screening. The American College of Obstetricians and Gynecologists acknowledges the importance of screening for social determinants of health in all patients, as these factors may influence decision making and communication. U.S. Preventive Services Task Force guidelines do not address early-onset breast cancer risk, except to state that the recommended screening guidelines do not apply to women with prior chest radiation or known underlying genetic mutations such as BRCA1 or BRCA2. National Institute for Health and Care Excellence guidelines recommend providing information and support for decision making but do not recommend any specific tool or decision aid. National Institute for Health and Care Excellence guidelines regarding familial breast cancer also recommend the use of shared decision making, materials, and decision aids, as well as standardizing the discussion involved in counseling patients and families at risk for familial breast cancers (see Appendix 9 [ http://links.lww.com/AOG/B872 ] for complete evidence summary).
There is limited evidence for risk modification specific to the outcome of early-onset breast cancer; the evidence for risk reduction among younger women is most robust for BRCA mutation carriers. Risk-reducing bilateral mastectomy should be considered in women with a genetic mutation conferring a high risk of breast cancer. There are no guidelines or studies addressing the age at which risk-reducing mastectomies should be undertaken, although age-related risk estimation tables may be useful to counsel women with BRCA mutations on the timing of prophylactic procedures. There is no evidence supporting risk-reducing mastectomies for women with low-risk genes or whose risk is based on nonhereditary factors alone. We found no evidence to support oophorectomy for the purpose of preventing early-onset breast cancer. Bilateral salpingo-oophorectomy has been estimated to reduce the lifetime risk of breast and ovarian cancer by as much as 50% in BRCA1 and BRCA2 carriers, although more recent publications question these results.

There are no guidelines or studies about the use of risk-reducing agents expressly for the purpose of reducing the risk of early-onset breast cancer. Tamoxifen is the only agent indicated for use in premenopausal women at increased risk of breast cancer and is recommended for women with a 5-year risk of 1.7% or higher; the risks and benefits in women younger than 35 years are not known. Most large trials of chemoprevention were performed in older women who had completed menopause. The National Surgical Adjuvant Breast and Bowel Project P-1 trial found a 44% decrease in cancer among women younger than 50 years treated with tamoxifen for chemoprevention. There are limited data regarding the magnitude of risk reduction with the use of tamoxifen for BRCA1 and BRCA2 mutation carriers or women with prior thoracic radiation. However, cohort data suggest there might be a benefit for BRCA2 carriers; the National Surgical Adjuvant Breast and Bowel Project P-1 study showed a nonsignificant 62% decrease relative to placebo (RR 0.38, 95% CI 0.06–1.56). Although other European studies have shown mixed effects, an overall reduction is supported by a systematic review of randomized controlled prevention trials across all studied populations, which showed a decreased risk of breast cancer for women younger than age 50 years (hazard ratio 0.66, 95% CI 0.52–0.85).

There is limited evidence for the modification of health behaviors to reduce the risk of early-onset breast cancer. A recent meta-analysis assessed numerous risk factors for BRCA carriers. Later age at first live birth was associated with a decreased lifetime risk of breast cancer for BRCA1 carriers (effect size 0.65 for women aged 30 years or older vs younger than 30 years, 95% CI 0.42–0.99); there was no effect of age at first birth for BRCA2 carriers. Breastfeeding also appeared protective against lifetime risk of cancer for BRCA1 carriers, although meta-analysis could not be performed because of study heterogeneity; reported effects based on case-control studies showed a 32–50% decreased risk when breastfeeding continued for more than 1 year compared with never breastfeeding. Additionally, three or more live births appeared to have a protective effect for BRCA1 carriers (effect size 0.57, 95% CI 0.39–0.85) as well as BRCA2 carriers (effect size 0.52, 95% CI 0.30–0.86), compared with nulliparity.
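A note on reading the trial statistics above (standard arithmetic, not study-specific data): the percent risk reduction implied by a relative risk or hazard ratio is one minus the ratio,

$$1 - 0.38 = 0.62 \;(62\%), \qquad 1 - 0.66 = 0.34 \;(34\%),$$

which is how the percentage reductions quoted alongside the RR and hazard-ratio figures in this section are obtained.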
For BRCA1 or BRCA2 carriers, there were no significant or reliably replicated effects of alcohol consumption, oral contraceptive use, or smoking. In review articles on risk factors for women at average risk, no reliable effect was seen for alcohol consumption or modification of other dietary factors on premenopausal breast cancer. There are no guidelines specific to the prevention of early-onset breast cancer; those that may be considered relevant address lifetime breast cancer risk reduction, largely among women older than age 35 years. The National Comprehensive Cancer Network recommends tamoxifen, 20 mg/d, for up to 5 years for women aged 35 years and older with a high 5-year risk of breast cancer, defined as a 5-year risk of 1.7% or higher using the Gail model, or prior lobular carcinoma in situ. U.S. Preventive Services Task Force guidelines for reducing the risk of primary cancer state that women at increased risk should engage in shared decision making regarding chemoprevention. The National Comprehensive Cancer Network advises a healthy lifestyle for reduction of breast cancer risk for all women, though the magnitude of this reduction, and whether it reduces the risk of early-onset or premenopausal breast cancer, is unknown. Elements of a healthy lifestyle advised by the National Comprehensive Cancer Network include limited alcohol consumption, vigorous physical activity, maintaining a healthy weight, and breastfeeding (see Appendix 10, available online at http://links.lww.com/AOG/B873 , for complete evidence summary). Breast self-examination is no longer part of major society guidelines for average-risk women, given the high number of false positives and the absence of supportive evidence for benefit. Our literature review found no evidence for its use in women at risk for early-onset breast cancer, but women should be counseled to be familiar with their breasts and to promptly report changes to their health care provider.
Survivorship in women with early-onset breast cancer is a critical component of initial evaluation and treatment as well as ongoing care. Chemotherapy frequently, though variably, causes amenorrhea, menopause, or true ovarian failure, with consequences such as infertility or subfertility, bone loss, increased cardiac risk, and menopausal symptoms, all of which can have a significant effect on quality of life. Age at diagnosis, receptor status, and treatment regimen are important considerations in managing ongoing care for women affected by early-onset breast cancer. The National Comprehensive Cancer Network and the American Society of Clinical Oncology have produced comprehensive guidelines for survivorship, and the American Cancer Society and the American Society of Clinical Oncology jointly created survivorship guidelines after systematic review in 2015. Although not specific to early-onset breast cancer, ACOG provides resources about managing gynecologic issues in women with breast cancer, many of which are applicable to women with early-onset disease. The American College of Obstetricians and Gynecologists recommendations include use of nonhormonal interventions for symptomatic patients, because data are conflicting about the deleterious effects of hormone therapy on recurrence and overall survival rates. Although not specific to women with early-onset breast cancer, the North American Menopause Society and the International Society for the Study of Women's Sexual Health have coauthored recommendations regarding the treatment of genitourinary syndrome of menopause in women with breast cancer.

Management of women who have or have had early-onset breast cancer should include attention to the issues of contraception, fertility, and pregnancy:
Effective contraception is often overlooked as part of the treatment regimen for patients with early-onset breast cancer, and family planning consultation should be considered. The copper IUS is the preferred contraceptive method for women with breast cancer, although the levonorgestrel IUS can safely be used in combination with tamoxifen. The preferred method of emergency contraception is the copper-containing IUS, although progestin regimens can also be used.
All women with early-onset breast cancer should have fertility preservation counseling. Oocyte and embryo cryopreservation is considered first-line treatment. Treatment with a gonadotropin-releasing hormone agonist during chemotherapy should be considered when oocyte and embryo cryopreservation is not possible; it affords some protection to the ovary and is associated with increased fertility rates compared with no treatment. Aromatase inhibitors and gonadotropin-releasing hormone agonist triggers should be used to lower estrogen levels when controlled ovarian stimulation is employed in women with a history of early-onset breast cancer undergoing fertility treatments. Preimplantation genetic diagnosis should be considered in women with BRCA mutations or other documented germline mutations undergoing in vitro fertilization procedures. Ovarian tissue harvesting offers a promising alternative to cryopreservation therapies.
Pregnancy after a diagnosis of early-onset breast cancer has not been shown to increase the risk of recurrence. With respect to timing, pregnancy occurring at least 10 months after breast cancer diagnosis was not found to be harmful and may even contribute to survivorship.
When breast cancer is diagnosed in pregnancy, chemotherapy can be safely instituted in the second and third trimesters. See Appendix 1 ( http://links.lww.com/AOG/B864 ) for complete evidence summary.
The evidence review and subsequent stakeholder discussion revealed the following research gaps and opportunities for early-onset breast cancer (see Appendix 11, available online at http://links.lww.com/AOG/B874 , for a more in-depth assessment):
Develop risk-assessment tools specific to early-onset breast cancer.
Optimize integration of risk assessment into primary care visits and electronic health records.
Obtain data on, and determine optimal screening for, nonwhite populations.
Determine risks associated with dense breasts in young women.
Determine appropriate adjunctive screening for young women with dense breasts.
Validate epidemiologic data, largely based on European populations, in U.S. women, including underrepresented subgroups.
Develop strategies to eliminate implicit bias among health care providers and medical systems.
Expand screening, genetic counseling, and testing among high-risk women.
Develop and validate tools for communicating early-onset breast cancer risk to patients.
Develop and validate training techniques for health care providers to screen, test, and initiate risk-reducing strategies in women at risk for early-onset breast cancer.
Determine the safety and optimal timing of pregnancy after treatment for early-onset breast cancer.
Optimize fertility preservation in women undergoing treatment for early-onset breast cancer.
COVID-19 information uptake amongst a rheumatology interested population

The AlbertaRheumatology.com website was established in 2010 with an intended audience of those interested in rheumatic disease in the province of Alberta, Canada. In March 2020, information on COVID-19 was first posted. In December 2020, a second page focused on COVID-19 vaccines was posted. Both pages underwent many revisions as the pandemic progressed and more information became available. Throughout this time, patients also submitted questions on the topic of COVID-19 to the "Ask the Rheumatologist" feature, some of which were answered on the website. Google Analytics, a web analytics tool, is embedded in the website and tracks the number of views, visit length, and visit geographic location. These data were collected and compared with non-COVID website resources. Ethics approval was waived; the data collected are anonymous and based on public website usage.
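As an illustration of the kind of aggregation this analysis involves, a minimal pandas sketch is shown below, computing monthly visit counts, traffic share, mean visit length, and the regional proportion from a per-visit export. The file name and column schema are assumptions for illustration only; Google Analytics does not export exactly this layout by default.

```python
import pandas as pd

# Hypothetical export: one row per visit, with page path, timestamp,
# visit duration in seconds, and visitor region.
visits = pd.read_csv("ga_export.csv", parse_dates=["timestamp"])

covid = visits[visits["page"].str.contains("covid", case=False)]

# Monthly visit counts, to locate the usage peaks.
monthly = covid.set_index("timestamp").resample("MS").size()
print(monthly.sort_values(ascending=False).head(3))

# Share of total site traffic and mean visit length for COVID pages.
print(f"COVID pages: {len(covid) / len(visits):.2%} of all page views")
print(covid.groupby("page")["duration_sec"].mean())

# Proportion of COVID-page visits originating in the target region.
print((covid["region"] == "Alberta").mean())
```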
Between January 1, 2020 and December 31, 2022, COVID-19 resources on the AlbertaRheumatology website received 16,969 webpage visits, representing 3.17% of all page views during this period (535,537 total visits across the website's 115 webpages). Peak visits occurred in March–April 2020 (2325 visits), January to March 2021 (6521 visits), and September 2021 (1021 visits), together accounting for 58.1% of all COVID-related visits (see Fig. ). Of the COVID-related visits, 9303 (54.82%) were to the COVID-19 vaccine page and 6663 (39.27%) to the COVID-19 overview page, with the remaining 1003 visits to the "Ask the Rheumatologist" area. Visit length averaged 4:08 min for COVID-19 vaccines and 2:11 min for the COVID-19 overview, compared with an average of only 1:15 min for all pages on the website. 70.0% of visitors to the COVID webpages were from the province of Alberta and 15.4% from other regions of Canada, while the remainder were international; by comparison, for the website overall, only 32.3% of users were from Alberta and 49.5% from Canada (see Fig. ).
The provision of COVID-19 information on the AlbertaRheumatology website appears to have been well received, with nearly 17,000 webpage visits recorded during the study period. Three clear peaks of usage were noted, corresponding to phases of the COVID-19 pandemic in Alberta: the first peak relates to the first wave of the pandemic, the second to the newly available vaccines, and the third to a significant COVID-19 wave in the province along with COVID-19 booster vaccine availability. While other papers have reviewed the quality of online COVID-19 information, the actual uptake of such resources has not been well described. It can be inferred that the target audience of Albertans was successfully reached, as the majority of visits originated from this region, a substantially higher proportion than for other webpages on the site. However, this study cannot determine the demographics of the end users, how they interpreted the provided information, or whether it affected how they proceeded during the pandemic. While further study is clearly needed, this study suggests that web-based information of this kind is worth producing, as engagement was very good among the identified geographic target audience.
|
Heterogeneity of diagnostic criteria for acute bronchiolitis in Spain | 3c70ffc2-a521-4d88-b72c-d847bf33b593 | 7105059 | Pediatrics[mh] | Acute bronchiolitis (AB) is a disease of the lower airways caused by viral infections, above all by respiratory syncytial virus (RSV), that is characteristic of infants and has a seasonal presentation . It is the leading cause of hospitalization in children under two years of age worldwide, with an annual hospitalization rate in Spain of around 25 per 1000 children of that age . The clinical picture of the disease has long been recognized, but it was not until 1940 that the term "bronchiolitis" was introduced to name it . That name has become the standard designation for the disease. In 1967, the US National Library of Medicine included bronchiolitis as a MeSH descriptor ("Bronchiolitis, Viral"). The name "bronchiolitis" is currently applied worldwide in hundreds of thousands of diagnoses every year. Nevertheless, problems with its definition persist . There are no universally accepted diagnostic criteria, so the same clinical picture may be diagnosed as AB or receive another label: infant asthma, bronchopneumonia, wheezing episode, or bronchitis with various qualifiers (spastic, asthmatic, catarrhal, etc.). Some authors even doubt its existence as an independent entity . Various scientific societies, evaluation agencies, and individual experts have produced several sets of clinical diagnostic criteria for AB, which differ in important respects . The case definitions and inclusion criteria used in clinical trials on AB are likewise heterogeneous, and many studies simply consider eligible those infants with "signs and symptoms consistent with bronchiolitis," the authors merely stating that one or another "well-accepted" definition of AB is applied . Inconsistencies also appear in clinical practice. The diagnostic criteria for AB are variable, and the diagnostic label assigned to a patient with clinical features of AB determines the therapeutic approach . Recent guidelines establish that there are no effective pharmacological treatments for AB , but clinicians do not follow the therapeutic recommendations proposed in the guidelines . This could be due to discrepancies among the recommendations of different guidelines , but also to differing opinions about what constitutes an AB . Likewise, differing diagnostic criteria are a source of controversy in assessing long-term prognosis . There is very little information on the diagnostic criteria for AB used in practice. A Portuguese study found substantial diversity, both among general practitioners and among pediatricians, in the criteria employed . In Spain, the AB guideline published in 2007 by the Ministry of Health does not include a definition of the disease , and Spanish publications usually cite one of the most common definitions from the international literature. Whether Spanish pediatricians use uniform criteria for the diagnosis of AB is unknown. The objective of the present study was to determine the criteria used for the diagnosis of AB by pediatricians in Spain, considering both experts in the field and the pediatricians as a whole who care for children with AB, and to analyze possible causes of their variability.
Two consecutive studies were carried out. The first investigated whether consensus existed among Spanish experts on the diagnosis of AB, using a Delphi procedure . Subsequently, a cross-sectional study was conducted by means of an online survey of clinical pediatricians to learn their opinions on the diagnosis of AB. The methods summarized here are described in detail in the (available in the electronic version). Delphi study (expert consensus). 1) Consensus articles, clinical practice guidelines, and reviews proposing diagnostic criteria, definitions, or standardized descriptions of AB were identified ( ). From these, a questionnaire was constructed to initiate the Delphi procedure. 2) Formation of the expert panel. Expert status was explicitly defined ( ), and the aim was to assemble a panel with sufficient representation of both the pediatric subspecialties related to AB and the geography of Spain. 3) Conduct of the process. The questionnaire ( ) was sent to the experts; it contained dichotomous (yes/no) questions, multiple-choice questions, and others rating from zero to ten the importance of a clinical feature for the diagnosis. Successive rounds were organized in which the initial responses were processed and resent to the participants, together with a summary of the opinions of all the experts and new items proposed by the participants. Criteria for consensus and for ending the process were defined, as detailed in the . 4) Analysis. A descriptive analysis of the results was performed, identifying the items on which agreement was reached according to the defined criterion. Cross-sectional study: opinions of clinical pediatricians. 1) Survey development. A survey similar to that used in the Delphi study was constructed, condensed to key aspects ( ). 2) Survey administration. With the collaboration of the Spanish Association of Pediatrics (AEP) and several Spanish pediatric scientific societies ( ), members were informed of the project by email and invited to take part in an online survey, offering as an incentive entry into a draw for registration at an AEP congress. 3) Sample size. A sample size calculation was performed, based on the estimate that there are about 10,000 pediatricians in Spain. Under the assumption of maximum indeterminacy, a 95% confidence level, and an estimation precision of 3%, the required sample size was 965 (details in the ; a worked check is sketched after this section). 4) Analysis. A descriptive analysis of the survey items was performed, and the association between responses and demographic factors such as age, gender, place of residence, and main type of professional activity (subspecialty) was investigated with χ2 tests, Mann-Whitney tests, and Spearman correlation coefficients. Using defined criteria ( ), responses coinciding with two of the most widely used definitions (McConnochie , NICE ) and with the expert consensus reached in the initial Delphi study were identified. Whether the variables studied could be reduced to a smaller set of parameters was investigated by factor analysis (Annex 2). Then, for each identified factor, differences in factor scores according to the aforementioned demographic variables were analyzed with Kruskal-Wallis tests and Spearman correlation coefficients. Differences with p < 0.05 were considered statistically significant.
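The reported figure of 965 is consistent with the usual formula for estimating a proportion with a finite population correction. As a worked check, assuming N = 10,000 pediatricians, p = 0.5 (maximum indeterminacy), z = 1.96 (95% confidence), and precision d = 0.03:

```latex
n_0 = \frac{z^2\, p(1-p)}{d^2}
    = \frac{1.96^2 \times 0.5 \times 0.5}{0.03^2} \approx 1067,
\qquad
n = \frac{n_0}{1 + \frac{n_0 - 1}{N}}
  = \frac{1067}{1 + \frac{1066}{10\,000}} \approx 965.
```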
The project was approved by the Research Ethics Committee of the Health Area of the principal investigator.
Delphi study - Expert consensus Sixty-six experts were identified. Eight could not be contacted, and 40 responded to the invitation to participate. There was broad representation of geography and of pediatric subspecialties ( ). The Delphi cycle was stopped after two rounds. Between the first and second rounds there were changes in only three items: one on which consensus existed from the outset (adenovirus as a causal agent), one on which consensus was reached in the second round (diagnosis in any season), and another (age limit for diagnosis) in which the change was toward a polarization of opinions (12 months or 24 months). The variables on which consensus was reached are shown in , and the full results in the ( ). That consensus can be expressed as follows: AB is a first episode of respiratory distress and increased respiratory rate, occurring in any season of the year, and the identification of a virus helps in the diagnosis. The viruses considered responsible are RSV, rhinovirus, influenza, metapneumovirus, bocavirus, parainfluenza, coronavirus, and adenovirus; Mycoplasma is not considered a causal agent of AB. Online survey The AEP sent the invitation to 8869 valid email addresses. A total of 1297 responses were obtained ( ). All the Autonomous Communities and all the pediatric subspecialties related to AB were represented. The responses are shown in . Agreement with the McConnochie criteria, the NICE criteria, or the expert consensus of the Delphi study was very low. Agreement with these standards was not related to subspecialty, age, gender, or place of residence. There were considerable differences according to subspecialty ( ), but in all subspecialties three opinions dominated: a single episode, diagnosis possible in all seasons, and a maximum limit of 24 months of age for the diagnosis. Intensivists were the most likely to restrict the age of diagnosis to 12 months, followed by neonatologists and hospital pediatricians. Regarding the importance of signs/symptoms for the diagnosis ( ), the most highly valued were respiratory distress and increased respiratory rate (above all by residents, neonatologists, and intensivists) and crackles. Cough and wheezing were appreciated mainly by primary care pediatricians. Opinions were also related to the age of the respondents ( ). Differences between Autonomous Communities were limited to the frequency with which crackles (p = 0.035) and wheezing (p < 0.001) were considered important, and to the possibility of diagnosis in all seasons (p = 0.043). Men accepted the diagnosis of more than one episode more frequently (p = 0.002) and more often considered virus identification important (p = 0.034), with no other differences between genders. Correlations between the scores for each sign/symptom were analyzed ( ). The only strong correlation coefficient (r = 0.768) was between respiratory distress and increased respiratory rate. The construction of the factor model is described in the , and the result in . Three factors were identified: "dyspnea," "catarrhal," and "auscultation." Maximum age for diagnosis, number of episodes, seasonality, and virus identification could not be incorporated into any of these factors. Factor scores differed significantly by subspecialty ( ) for "dyspnea" (p < 0.001) and "catarrhal" (p = 0.005). Residents, neonatologists, and intensivists gave higher scores on "dyspnea," whereas primary care pediatricians and general ward pediatricians scored higher on "catarrhal." There were no subspecialty-related differences (p = 0.231) for "auscultation." Apart from a marginal significance (p = 0.049) for "dyspnea," there were no geographic differences related to the identified factors, nor were there gender-related differences. Age correlated weakly, although significantly, with the scores on the three factors: "dyspnea" (r = -0.144, p < 0.001), "catarrhal" (r = 0.084, p = 0.002), and "auscultation" (r = -0.077, p = 0.005).
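The dimensional reduction reported above can be reproduced in outline with an exploratory factor analysis of the sign/symptom ratings. A minimal sketch, assuming a table with one 0–10 column per item (the column names are hypothetical, and the paper's exact extraction and rotation settings are described in its annex):

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Hypothetical item columns, each rated 0-10 by every respondent
items = ["resp_distress", "resp_rate", "cough", "rhinorrhea",
         "crackles", "wheezing"]
ratings = pd.read_csv("survey_ratings.csv")[items].dropna()

# Three-factor solution with an orthogonal (varimax) rotation
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(ratings)

# Factor labels are assigned after inspecting the loadings; the
# order of extracted factors is not guaranteed in advance.
loadings = pd.DataFrame(fa.loadings_, index=items,
                        columns=["dyspnea", "catarrhal", "auscultation"])
print(loadings.round(2))        # which items load on which factor

scores = fa.transform(ratings)  # per-respondent factor scores, usable for
                                # Kruskal-Wallis comparisons across subspecialties
```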
Main findings . The criteria for diagnosing AB in Spain are heterogeneous. Among experts, there is a minimal consensus that does not cover aspects as relevant as the maximum age at which the diagnosis is acceptable. Among clinical pediatricians the use of standard criteria is very low, but the majority consider that the diagnosis should be restricted to the first episode and the first 24 months of life. The importance given to different signs/symptoms varies with the type of professional activity. Hospital specialists such as intensivists and neonatologists emphasize the value of dyspnea, whereas the catarrhal aspect of AB is considered mainly by primary care pediatricians. There are also moderate differences related to the pediatrician's age. However, the heterogeneity does not appear to be related to geographic factors. Interpretation . More than 40 years ago, McIntosh wrote of bronchiolitis that "all clinicians who care for young children know what it means" . That certainty now seems in doubt, if indeed it ever existed. The criteria used for diagnosis differ between countries (for example, between the United States and northern Europe) and according to the physician caring for the patient . It is known that the choice of treatment in children with clinical features of AB is associated with the "label" chosen for the diagnosis ; it has even been argued that the supposed inefficacy of drugs in AB is due to an inadequate definition of AB . In the Delphi study with experts, the greatest discrepancy was the maximum age for diagnosis. The second round, rather than increasing consensus, polarized positions (between 12 and 24 months). The same antagonism has been observed in the United Kingdom . This is alarming, since age is the factor with the greatest influence on assigning a diagnosis of AB to a patient with compatible signs and symptoms . All the commonly used definitions restrict the diagnosis of AB to infants, and several set the upper limit at 24 months , a criterion with which two thirds of the Spanish pediatricians who responded to the online survey agree. Other definitions, however, propose a limit of 12 months , and some insist that AB occurs mainly in children under six months of age . A lower age limit is usually found in definitions originating from Scandinavia . Among the scant evidence that may help resolve this question, it is worth noting that a Spanish community-based study of incident cases identified a specific wheezing phenotype characterized by a single episode typically occurring before 13 months of age, with peak incidence at 7 months . Furthermore, both the experts and the clinical pediatricians supported the view that only a first episode can be called AB. McConnochie first included this criterion in his definition of AB in 1983. Many clinical guidelines now incorporate it in their definitions . Others merely warn that repeated episodes should suggest other diagnoses, such as asthma or "virus-induced wheezing" . Many clinical trials enroll only infants with a first episode, so that their inclusion criteria cannot be disputed .
It is also worth commenting on the agreement in Spain that AB can be diagnosed in any season of the year, despite the fact that children hospitalized for AB during the winter epidemic have distinguishing characteristics: they are more likely to have RSV infection, their illness is more severe, they more often have a history of tobacco exposure during gestation, a family history of asthma is less likely, and they have lower blood eosinophil counts . Some definitions do include seasonality among the characteristics important for the diagnosis . In the online survey we identified three factors relating to the clinical manifestations of AB, which we have called dyspnea, catarrhal, and auscultation. Of these, the one explaining the largest share of diagnostic variability is dyspnea. These factors carry unequal weight across the pediatric subspecialties. We believe these differences are due, in part, to the different severity spectrum of AB seen in each care setting. Perhaps the most surprising finding of our study is the agreement in regarding AB as a specific disease, in contrast to the scant agreement generated by its distinguishing features. In recent years, the way airway diseases are understood has been changing, toward disaggregating their components and identifying phenotypes and endotypes that may respond differently to treatment . AB is no exception. Several phenotypes of severe AB have already been identified and research in this field continues . In this regard, some data suggest that AB due to rhinovirus infection has particular clinical and epidemiological characteristics , and a different long-term prognosis with respect to its relationship with atopic asthma . Viral etiology is highlighted in descriptions of the disease or specifically incorporated into the definition in some guidelines , and many clinical trials include only children with RSV infection . Guidelines do not recommend routine identification of the causal agent. Detection of RSV may perhaps make the clinician more inclined to diagnose AB and to treat according to current guidelines, but this benefit is not clear . Among Spanish experts there was consensus in considering viral identification useful in the diagnosis of AB, but the clinical pediatricians did not share this view. Our results can be compared with those of a recent Portuguese study . In that study, only 32% of pediatricians required the first-episode criterion and 76% set the age limit at 24 months. Portuguese pediatricians gave less importance to these two aspects for the diagnosis of AB than the general practitioners, who were also included in that study. Limitations . The Delphi method can be applied in different ways. We followed international recommendations regarding the selection of experts and defined criteria for ending the consultation rounds and for accepting the existence of consensus. As with any expert opinion, the result of the consensus is not based on firm evidence, a limitation shared by all currently existing diagnostic criteria for AB. The online survey involves a selective participation bias, reflected in the fact that almost 94% of respondents reported frequently caring for children with AB.
This bias, however, is of little relevance, since the target population of the study is precisely Spanish pediatricians who care for children with AB. Participation will certainly have been lower among pediatricians with less interest in this disease; to encourage those responses we offered the incentive of the prize draw, although it is impossible to know how effective it may have been. The recruitment system was imperfect, owing to deficiencies in the databases of the collaborating societies, compounded by the fact that many pediatricians may not check their email frequently. Although the representativeness of the sample is difficult to assess, it is important that there was sufficient representation of all the Autonomous Communities and of all the subspecialties involved in the care of children with AB. Moreover, survey responses may not match clinical practice, since investigation of actual practice was beyond the scope of our study.
The diagnostic criteria for AB considered by pediatricians in Spain are heterogeneous, differ between experts and clinicians, and depend on factors such as pediatric subspecialty. This undoubtedly hinders adherence to the recommendations of clinical guidelines. There are initiatives, such as that of the European Respiratory Society ( https://taskforces.ersnet.org/item/standardizing-definitions-and-outcome-measures-in-acute-bronchiolitis ), that seek to standardize the diagnosis of AB, but one may ask whether such standardization would not again be arbitrary and carry all the defects of previous definitions. We believe it would be better to devote more effort to advancing knowledge of the heterogeneity of AB and other respiratory diseases, using diagnostic labels matched to well-differentiated and identifiable endotypes.
Fundación Ernesto Sánchez Villares (Project 02/2017).
The authors declare that they have no conflicts of interest.
|
Heat stroke: knowledge and practices of medical professionals in pediatric emergency medicine departments – a survey study | e9056a27-9dd0-45f8-b7d1-52e31f6a43b5 | 8173899 | Pediatrics[mh] | Heat stroke, a life-threatening condition, may occur in children and requires rapid cooling to ensure the best chance of survival.
Health care workers' knowledge of cooling treatments and their practices were found to vary. Certified PEM physicians and simulation training may aid in implementing proper management of patients with heat stroke.
Heat stroke is a life-threatening condition clinically diagnosed as a severe elevation in body temperature with central nervous system dysfunction that often includes combativeness, delirium, seizures, and coma . Heat stroke occurs because of high external temperatures or physical exertion, leading respectively to motor vehicle related hyperthermia in children (MVRHC) and to exertional heat stroke in athletes in hot environments. Children left in cars for even short periods risk death by hyperthermia . In the United States alone, 231 MVRHC fatalities were reported between 1999 and 2007. In 80% of cases, children were left unattended . Similarly in Israel, between 2008 and 2019, over 800 cases of MVRHC were reported, 35 of which ended in a fatality . Israel's average temperature in the summer months typically ranges between 35 and 40 degrees Celsius, and the temperature in a closed car in the middle of the day can reach 70 degrees Celsius . As a result, the pediatric emergency department (PED) should be prepared to treat heat stroke under those conditions. The prognosis of heat stroke is directly related to the degree of hyperthermia and its duration. Therefore, besides prevention, the most important feature of the treatment of heat stroke is rapid cooling . The method of rapid cooling has long been debated in medical practice. Data, mainly from research on exercise-induced hyperthermia, suggest that optimal cooling includes immersion in ice or tepid water (1–16 °C), if readily available in the emergency department (ED) . Although several sports organizations suggest using ice-water or cold-water immersion for the treatment of patients with heat stroke, basic research studies have shown that evaporative cooling is equally effective . The preferred method of evaporative cooling in the ED, as listed by the reference textbook endorsed by the Israeli Society of Pediatric Emergency Medicine (PEMI), is active cooling by spraying the patient with water and positioning fans to blow air across the body . The historical differences in clinical approach and in the evidence-based data, the difficulty of performing prospective studies in children, and the fact that MVRHC is a rare condition in the ED make this a particularly relevant topic to practice using medical simulation. Being a true medical emergency, the approach to treatment of heat stroke must be fast and effective. This study aimed to assess medical professionals' knowledge of the treatment of heat stroke, and ED preparedness to provide it, as expressed by appropriate supplies and equipment. This study is the first attempt to test the preparedness of PEDs for treating heat stroke in Israel.
We conducted a cross-sectional survey to assess hyperthermia management practices and available resources in all Israeli EDs that accept children. An online questionnaire (Hebrew version: https://forms.gle/xyjfehHx1iEu5eJw7 , English version: https://forms.gle/Pzdem8oMjuKJJNYz6 ) was developed. The questionnaire was designed by the authors after compiling a list of all types of management strategies for heat stroke, consulting the research literature and experts in pediatric trauma. The survey questions covered specific procedures and protocols for heat stroke management, based on two clinical scenarios (MVRHC and exertional heat stroke). Once the online version of the survey was developed in Google Forms, it was piloted by two pediatric emergency medicine (PEM) physicians and two head ED nurses to assess its relevance, usability, and total completion time. The feedback from these participants was incorporated into subsequent modifications. We contacted the department head and head nurse of each ED or PED in all public hospitals across the country on May 17, 2020, and asked them to distribute the survey to their staff members. A reminder was sent at least two more times within the study period via WhatsApp and email. Participants were told that the purpose of the survey was to gain a better understanding of the treatment of hyperthermia in EDs that accept children. Answers were collected between June 17th and August 17th, 2020. We included any HCWs employed in an ED that accepts children. We excluded any incomplete form. This study received an ethical waiver from the institutional review board, as it did not use patients or patient data.
In total, 210 questionnaire responses were received. Two were excluded as they were incomplete. Data from 208 medical professionals (physicians and nurses) aged 25 to 58 were analyzed. Of the 22 EDs that care for children in Israel, 20 were represented within the responder group, covering all geographic areas of the country. At the time of the survey there were approximately 40 PEM physicians actively working in PEDs, 20 pediatricians, 400 pediatric residents, and 270 nurses. Response rates varied between groups: the highest was among PEM physicians (31/40, 78%) and the lowest among nurses (62/270, 23%). The majority of the responders were physicians, largely pediatric residents (40%), and including 15% PEM specialists. The main characteristics of the responders are presented in Table , including their experience with real-life cases and simulation. Of the 30% who had ever treated a patient with exertional heat stroke, 73% felt the treatment was adequate. Twenty-one percent of the responders had treated MVRHC in the past; of them, 84% reported they felt the treatment was adequate. When presented with a scenario of an infant with MVRHC, 125 (60%) listed cool water of any temperature with a fan as the primary mode of cooling, which is considered the preferred method of evaporative cooling in the PED. Thirty-nine (19%) responded cooling with ice packs, 27 (13%) an ice bath, 13 (6%) cold intravenous fluids, and one requested peritoneal lavage with cold fluids. When given a scenario on exertional heat stroke, 83 (40%) listed cool water of any temperature with a fan as the primary mode of cooling, which is considered the preferred mode. Sixty-eight (33%) responded cooling with cold intravenous fluids, 30 (14%) ice packs, 10 (5%) an ice bath, and 14 (7%) requested peritoneal lavage with cold fluids. Knowledge of management was assessed for differences by responder specialty and for the influence of participation in simulations (Table ). PEM physicians were more likely to answer correctly when asked about the treatment of MVRHC (26/31 PEM physicians answered correctly vs. 58/115 non-PEM physicians, p = 0.00001). Similarly, PEM physicians were more likely to answer correctly when asked about the treatment of exertional heat stroke (20/31 PEM physicians vs. 27/115 non-PEM physicians, p = 0.0001). Responders who had participated in a hyperthermia-related simulation had a higher percentage of correct answers for the exertional heat stroke scenario (34/58 vs. 49/150, p = 0.0009). When considering treatment logistics in the heat stroke scenarios, of the 125 responders who wished to use a fan, 21 stated they did not have a fan, 13 did not know where the fan was stored, and 41 were not sure if they had a fan. Three respondents answered that they would use ice in the resuscitation in one or both scenarios but did not know where the ice was located.
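As an illustration, the subgroup comparisons reported above can be checked from the published counts with a standard 2×2 test. A minimal sketch for the MVRHC scenario (the authors' exact test and any continuity correction are not stated, so the p-value may differ slightly from the reported one):

```python
from scipy.stats import chi2_contingency, fisher_exact

# Correct vs. incorrect answers for the MVRHC scenario (reported counts)
table = [[26, 31 - 26],    # PEM physicians: 26/31 correct
         [58, 115 - 58]]   # non-PEM physicians: 58/115 correct

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio, p_exact = fisher_exact(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p_exact:.5f}")
```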
This study describes the variability in both theoretical and practical knowledge in treating heat stroke in Israeli EDs. It also identifies the differences between PEM physicians and non-PEM physicians and highlights opportunities for improvement, including changes in policy. While good basic knowledge and strategies are reported as available in most EDs, there are important deficits whose correction can be easily implemented to ensure more effective treatment of heat stroke. A relatively low number of participants in our survey reported ever treating hyperthermia. The medical teams' limited exposure to heat stroke cases requires educational programs, such as medical simulation, to be developed. The goal should be to improve both theoretical and practical knowledge and to increase medical teams' comfort with the heat stroke management algorithm and cooling techniques. Simulation-based training has been shown to be an effective tool that provides a controlled learning environment in which to practice a wide range of clinical scenarios . Simulation programs have been shown to produce significant improvement in specific scenarios, especially rare and critical ones, such as pediatric cardiac arrest . In our survey, a low number of physicians and nurses reported having ever participated in any simulation of hyperthermia (28%). This group was found to have significantly higher rates of acceptable management in the teenage exertional heat stroke scenario, but not in the infant MVRHC scenario. This may reflect the low rate of simulation participation, limiting the true impact of simulations in our population. Evaporative cooling was the most frequently selected primary mode of cooling in both scenarios; however, it was not the preferred management in the majority of responses in the exertional heat stroke scenario. Despite traditional teaching that cooled intravenous fluids may precipitate arrhythmias, 33% of responders chose this answer. As expected, certified PEM physicians had significantly higher rates of correct management in both scenarios compared to non-PEM physicians. The importance of having a PEM physician on site is well documented . We recognized not only the knowledge gap in choosing the right cooling method, but also practical barriers in treatment logistics. Personnel were not sure whether they had fans in the ED or where they were located. It is extremely important for medical personnel to be familiar with the equipment in the ED and its operation. All of these goals can be achieved by solidifying a country-wide protocol and participating in medical simulations, which provide both theoretical knowledge and practical tools. The present study had several limitations. A sampling bias may have occurred with regard to the proportion of responders. PED teams are very heterogeneous and vary in number from center to center. Despite surveying all Israeli EDs that accept children, the compared groups were small and response rates varied. As a result, the low statistical power reduces the chance of detecting a true difference in practice. Lastly, we had no details about the timing, type, and quality of the simulations that responders had participated in, so the influence of past simulation experience was unequal. Further research should be performed with pre- and post-simulation assessment of a uniform heat stroke scenario.
The present study highlights shortcomings in the understanding and care of children with heat stroke. Policy change should be urgently made by the Israeli PEM community, via the establishment and implementation of appropriate guidelines for the treatment of exertional heat stroke. A national protocol should be urgently written in collaboration with the Israeli Ministry of Health, the Israeli Association of Emergency Medicine, and other stakeholders. Moreover, an interventional educational program built on simulation should be established. Based on our experience, HCWs should be mandated to attend simulations of such cases at least once every 2 years. Finally, resources should be allocated to PEDs to provide both the equipment and the personnel needed, with an emphasis on the presence and guidance of certified PEM physicians. Personnel should have regularly guided sessions on where to find the resources needed to treat children suffering from hyperthermia.
|
Knowledge, attitudes, and practices regarding blood exposure accidents and eHealth literacy among Tunisian medical students: a cross-sectional study | f41617b6-4c75-40d1-8285-bcf59107e544 | 11895171 | Health Literacy[mh] | Blood exposure accidents (BEA) constitute a major concern for healthcare workers (HCWs), as they represent a significant occupational risk to their health and safety. They are a leading cause of occupational accidents in healthcare settings , and remain a common and life-threatening hazard for HCWs, endangering their lives particularly because of the risk of viral contamination . BEA are associated with a high risk of infection with many bloodborne viruses . Indeed, the burden of exposure to bloodborne pathogens remains significant for HCWs, who are directly exposed to the risk of acquiring infections, the most feared of which are hepatitis B virus (HBV), hepatitis C virus (HCV), and human immunodeficiency virus (HIV) infections . Their severity relates to their ability to induce chronic viremia and to the seriousness of the infections caused . Recent evidence has suggested that nursing professionals are the group most affected by BEA, followed by medical professionals, especially those practicing surgical specialties, and trainees, particularly intern doctors . Indeed, a recent French study reported that almost one in three junior doctors (31.7%) had been victims of BEA . The risk of pathogen transmission from an infected person following a sharps or needlestick injury has been well documented . In fact, it has been estimated that the risk of HBV transmission from an infected person following an injury is nearly 30%, versus 3% for HCV and 0.3% for HIV . This is of particular concern since the world health organization (WHO) has estimated that approximately three million HCWs are exposed to bloodborne pathogens through BEA each year . Along these lines, a recent Tunisian study conducted among HCWs in two university hospitals showed that the main cause of BEA was needlestick injuries, reported in 85% of cases, with the most common mechanism being needle recapping, which occurred in 78% of cases . BEA thus appear to be an occupational hazard with sometimes dramatic but avoidable consequences . As previous research and reports have highlighted increased awareness of the infectious risk that complicates BEA, numerous surveys of BEA have been regularly conducted among HCWs, but few studies have focused on health care students, particularly undergraduate medical students . In developing countries, including Tunisia, the burden of BEA remains high, and although there is strong evidence of the effectiveness and cost-effectiveness of compliance with standard hygiene precautions, including BEA management, such precautions remain inadequate, particularly among undergraduate medical students . BEA prevention rests on adequate knowledge, positive attitudes, and appropriate safety practices. In fact, whether a population's practices are appropriate or not is the result of correct or incorrect attitudes based on the level of knowledge about the phenomenon in question . However, in order to acquire adequate knowledge, positive attitudes, and appropriate practices, educating medical students in a culture of safe care is crucial in the health care field. Nowadays, one of the most effective ways to provide this education is through e-learning.
The combination of a robust education in safe care culture and mastery of digital health literacy prepares undergraduate medical students to provide quality care while minimizing risk to patients . Over the past two decades, undergraduate medical students have tended to use the Internet as their primary source of health-related information . As a result, they can easily obtain and learn many types of information about various aspects of human life, both for their daily lives and for their studies, as the Internet encourages anytime, anywhere learning activities . However, despite having access to a wide range of learning resources and information, medical students can be overwhelmed by this vast amount of information . The ability to search for, locate, understand, and evaluate health information from electronic resources in order to apply this knowledge appropriately to address or solve health problems, and to promote and maintain good health, is known as electronic health literacy, or "e-health literacy" . It is crucial that medical students have a high level of e-health skills, as it is during this period of their lives that they develop lifelong learning skills and gain experience . In fact, medical students do not only use health information to manage their own health; they also learn to use this information for their medical studies and to create care plans. Online learning has become increasingly popular since the COVID-19 pandemic . In the context of BEA, e-health literacy is especially valuable in enabling medical students to access the most up-to-date information on the prevention, management, and treatment of BEA. Emerging literature and evidence suggest that e-health literacy is closely related to medical students' knowledge, attitudes, and practices . Several studies have been conducted to assess HCWs' knowledge, attitudes, and practices regarding BEA, but to our knowledge, no study has investigated this topic among medical students or its relationship with e-health literacy. The objective of this study was to assess the knowledge, attitudes, and practices regarding BEA, as well as the level of e-health literacy, and their potential associations among medical students at a Tunisian medical school.
Study design, participants and setting We conducted a comprehensive cross-sectional study among undergraduate medical students at Ibn Al Jazzar University of Medicine in Sousse Governorate, Tunisia, from January to May 2023 of the academic year 2022–2023. Ibn Al Jazzar University of Medicine (the Faculty of Medicine of Sousse) was chosen as the study site because it is one of the four faculties of medicine in Tunisia and because of its location in the center-east of the country. The academic curriculum at Ibn Al Jazzar University of Medicine lasts five years in total. The first and second years constitute the pre-clinical phase ("first cycle"), while the third, fourth, and fifth years constitute the clinical phase ("second cycle"). Participants eligible for this study were undergraduate medical students from the second to the fifth year. First-year students were excluded, mainly because they had not yet been exposed to clinical environments (such exposure begins with the summer internship) and therefore lacked practical clinical training at the time of data collection. Given the nature of blood exposure accidents, which occur in hands-on clinical settings, first-year students have no interaction with these situations and thus lack the skills and experience needed to respond to the questionnaire. Measurement and variable description Electronic Health Literacy Scale (eHEALS) E-health literacy skills were measured using the validated "Electronic Health Literacy Scale" (eHEALS) in its English version and its French translation . This scale assesses users' knowledge and perceived ability to locate, evaluate, and apply digital health information to answer health questions. The scale consists of 8 items measuring e-health literacy on a 5-point Likert scale (ranging from 1 = strongly disagree to 5 = strongly agree), which are summed to give a score of 8 to 40, plus 2 additional items on the importance and usefulness individuals attach to the Internet in making decisions about their health; these two items are not counted in the final score. High e-health literacy was defined as an above-average e-health literacy score . Measure of participants' knowledge, attitudes, and practices We used a self-administered questionnaire that was pre-tested and validated by experts in infection prevention and control and in occupational medicine, with reference to the literature; the knowledge subscale included 11 items, while the attitudes and practices subscales included 14 and 2 items, respectively. Knowledge of standard precautions refers to the understanding and application of infection prevention and control measures designed to protect healthcare workers and patients from the transmission of pathogens. Scores are calculated by adding the correct answers: for yes/no questions, correct answers are counted; for multiple-choice questions, only the correct options are counted. For each of the subsections, namely knowledge, attitudes, and practices, a good level was defined as a score above the average . Data collection The questionnaires were distributed at the beginning of tutorial sessions, with the prior consent of the teacher in charge of the session. The questionnaires were completed and collected on site. Participants were given 5 to 10 min to complete the questionnaire. Clear instructions were given at the beginning to ensure accurate and consistent responses.
These instructions included guidance on how to read and answer each question, as well as clarification of the purpose of the study, to minimize misunderstandings and ensure that participants felt comfortable and confident in their responses. Statistical analysis Data entry and analysis were performed using IBM SPSS Statistics (version 26). Quantitative variables were presented as means ± standard deviations when the distribution was normal. Categorical variables were presented as frequencies and percentages and compared using the chi-squared test when validity conditions allowed. Univariate analyses were performed to examine the various associations. The significance threshold was set at 5%, and the strength of each association was estimated by calculating the odds ratio (OR) and its 95% confidence interval (CI). Multivariate analyses were performed using backward (descending) stepwise binary logistic regression. All variables with a p-value less than or equal to 20% in the univariate analysis were included in the multivariate analysis. The significance threshold was set at 5%, and the strength of association was estimated by calculating the adjusted odds ratio (aOR) and its 95% CI.
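A minimal sketch of the eHEALS scoring rule described above (the item column names are hypothetical; "high" literacy is defined relative to the sample mean, as in the methods):

```python
import pandas as pd

def score_eheals(df: pd.DataFrame) -> pd.DataFrame:
    """Sum the 8 Likert items (1-5 each) into an 8-40 eHEALS score and
    flag above-average scores as high e-health literacy."""
    items = [f"eheals_{i}" for i in range(1, 9)]  # hypothetical column names
    out = df.copy()
    out["eheals_score"] = out[items].sum(axis=1)  # possible range: 8-40
    out["high_ehealth_literacy"] = out["eheals_score"] > out["eheals_score"].mean()
    return out

# Example: one uniformly agreeing and one uniformly disagreeing respondent
sample = pd.DataFrame([{f"eheals_{i}": 4 for i in range(1, 9)},
                       {f"eheals_{i}": 2 for i in range(1, 9)}])
print(score_eheals(sample)[["eheals_score", "high_ehealth_literacy"]])
```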
General characteristics of the study respondents A total of 580 medical students were enrolled (overall response rate of 54.10%), with a mean age of 21.6 ± 1.2 years and a female predominance (73.3%; sex ratio 0.36). Four out of five students (79.1%) were in their second cycle of medical studies, with 30.2% enrolled in the fifth year and 25.5% in the third year. Most of the respondents (70%) had not received BEA training, and almost a quarter of the participants (24.3%; 95% CI: 20.7–27.9) reported having been victims of BEA. Regarding HBV vaccination, the vast majority of students (93.3%) had received at least one dose of the vaccine, with half (54.4%) completing their vaccination with two or more doses. Almost half reported having antibodies ≥ 10 IU/ml (Table ). Knowledge, attitudes, and practices regarding blood exposure accidents among undergraduate medical students Overall, the mean KAP score of medical students was 19.98, with almost half of the sample (49.1%, 95% CI: 45–53.2) achieving good levels of knowledge, attitudes, and practices. In terms of individual sections, most participants demonstrated good levels of knowledge and practices (60.7% and 62.1%, respectively), with mean scores of 10.33 out of 17 knowledge items and 4.07 out of 7 practice items. Similarly, the mean attitude score was 5.58 out of 11 attitude items, with half of the medical students (50.7%) having a good level of attitudes. Undergraduate medical students' e-health literacy using the Electronic Health Literacy Scale (eHEALS) The mean eHEALS score for the sampled students was 28.22 ± 6.85. More than half of the sample (55.7%, 95% CI: 51.7–59.7%) achieved a high level of e-health literacy. Factors associated with the occurrence of blood exposure accidents: univariate analysis Univariate analysis showed that medical students with inadequate knowledge of standard precautions were significantly more likely to be BEA victims (31.2% vs. 20.8%, p = 0.006; OR = 1.72, 95% CI: 1.16–2.54). In addition, BEA was significantly more common among students in the second cycle of medicine (27.3% vs. 13.2%, p = 0.001; OR = 2.46, 95% CI: 1.4–4.3), as well as among fourth-year students. On the other hand, a good practice score and a good KAP score were significantly associated with a lower risk of BEA (20.6% vs. 30.6%, p = 0.006, OR = 0.58, 95% CI: 0.40–0.86; and 20.4% vs. 28.2%, p = 0.027, OR = 0.65, 95% CI: 0.44–0.95, respectively) (Table ). Univariate analysis of factors associated with e-health literacy among undergraduate medical students Medical students who reported good knowledge of standard precautions had significantly higher levels of e-health literacy (62.8% vs. 52.0%, p = 0.013; crude OR = 1.56, 95% CI: 1.09–2.21). Similarly, those enrolled in the fourth year had significantly higher levels of e-health literacy (p = 0.014). In addition, as expected, participants with higher KAP scores were almost twice as likely to have high levels of e-health literacy (p = 0.004, OR = 1.64, 95% CI: 1.17–2.30; p = 0.004, OR = 1.60, 95% CI: 1.16–2.26; p = 0.02, OR = 1.49, 95% CI: 1.04–2.09; and p = 0.002, OR = 1.67, 95% CI: 1.20–2.32) (Table ). Factors associated with medical students' knowledge, attitudes and practices regarding blood exposure accidents Medical students who reported being victims of BEA had significantly lower levels of KAP (28.2% vs. 20.4%, p = 0.027; OR = 0.65, 95% CI: 0.44–0.95).
In addition, although not significantly associated, being adequately informed about the risks of BEA and having received training about BEA were associated with higher levels of KAP among medical students (38.2% vs. 35.3% and 33.7% vs. 26.4%, respectively).

Multivariate analysis
The predictors of being a BEA victim were being enrolled in the second cycle of medical studies (aOR = 2.50, 95% CI: 1.40–4.30) and having inadequate knowledge of standard precautions (aOR = 1.58, 95% CI: 1.04–2.39), while having good knowledge, attitudes and practices regarding BEA was a negative predictive factor (aOR = 0.59, 95% CI: 0.39–0.89). In addition, having adequate knowledge of standard precautions (aOR = 1.57, 95% CI: 1.10–2.23) and having good knowledge regarding BEA (aOR = 1.63, 95% CI: 1.17–2.27) were the determinants of high levels of e-health literacy regarding BEA. Similarly, being adequately informed about the risks of BEA (aOR = 1.44, 95% CI: 1.01–2.06) was a positive determinant, whereas having a history of BEA (aOR = 0.63, 95% CI: 0.43–0.93) was a negative determinant, of high levels of knowledge, attitudes and practices among medical students (Table ).
BEA poses a significant health and safety risk to healthcare workers. It remains one of the leading causes of occupational accidents and endangers the lives of healthcare workers through the risk of viral contamination, particularly with HIV, HBV and HCV, pathogens associated with serious infections. Medical students, as future HCWs, are exposed to this risk, but few studies have focused on this specific group, highlighting the need to better understand their level of knowledge, attitudes and practices regarding BEA. In this respect, the relationship between KAP regarding BEA and e-health literacy among medical students is an innovative aspect. Indeed, in the digital age, e-health literacy appears crucial for accessing the latest information on the prevention, management and treatment of BEA. The importance of e-health literacy in assessing the credibility of information sources has been emphasized, hence the need to train students in this skill. Accordingly, the main objective of this study was to assess knowledge, attitudes and practices, as well as the level of e-health literacy, towards BEA and their potential associations among medical students at a Tunisian medical school. The current study not only revealed that medical students tended to have good levels of KAP as well as of e-health literacy, but also highlighted the contributions of academic level, knowledge of standard precautions, KAP on BEA, and history of BEA in explaining the relationship between KAP and e-health literacy levels towards BEA. The current study found that almost a quarter of the students surveyed had been victims of a BEA; of these incidents, a significant majority were attributed to lack of attention. This relatively low rate compared with similar studies can be explained by the Tunisian context, where venipuncture is mainly performed by nurses, unlike in Western countries such as the USA, where it is performed by medical students. This discrepancy may partly explain the different incident rates. Indeed, health policies and medical education programs differ between countries and regions, which may influence the incidence of such events. In Tunisia, for example, nurses typically receive specific training in venipuncture, which may reduce the incidence of errors among students. In contrast, medical students in Western countries often start performing venipuncture earlier in their training, which may lead to higher error rates due to their lack of experience. In addition, differences in healthcare infrastructure, such as the availability of trained staff and equipment, may affect the incidence of such events across regions and countries. However, it is worth noting that in all these countries, students are exposed to high-risk situations owing to the nature of the medical procedures performed and their inexperience, especially at the beginning of their training. BEA was found to be significantly more common among second-cycle medical students; indeed, being enrolled in the second cycle of medical studies more than doubled the risk of being a victim of BEA. Our results are consistent with those of a study conducted in Strasbourg on BEA among medical students, which revealed that third- and fourth-year students were victims of BEA during their clinical internships.
A plausible explanation for this disparity highlighted in the present study is the non-inclusion of first-year students, who do not undertake clinical internships, and the greater clinical involvement of second-cycle medical students. However, it is crucial to note that each situation remains unique and that the prevention of BEA cannot be entirely linked to the level of progress in medical studies. Overall, medical students tended to have good levels of KAP, which is roughly in line with similar studies conducted among medical students. In fact, being adequately informed about the risks of BEA was a positive determinant of high levels of knowledge, attitudes and practices among medical students. Our results indicate that 30% of the students surveyed reported having received training on BEA, and a higher percentage (63.3%) reported being adequately informed about the associated risks. These figures are in line with a similar study conducted in Mali, which reported that 34.3% of students had received training on BEA and had good levels of knowledge, attitudes and practices. Likewise, having good KAP regarding BEA was a negative predictive factor for being a victim of BEA. Indeed, the medical students surveyed seem to be aware of the risk of viral transmission, such that their risk of being a BEA victim is lower. Regarding the risk of HBV and HCV transmission, our results showed that 39.8% and 28.8% of students, respectively, were aware of the risk of transmission, in agreement with a recent Moroccan study in which 39.12% and 49.60% of participants, respectively, gave a correct answer. Furthermore, the majority of medical students correctly identified the mechanisms of BEA and the circumstances of its occurrence; our results appear to be in accordance with the findings of many other surveys. Medical students with inadequate knowledge of standard precautions were significantly more prone to be BEA victims; indeed, inadequate knowledge of standard precautions was found to almost double the risk of being a BEA victim. In the present study, most of the students surveyed (65.7%) were unaware of standard precautions regarding BEA, which coincides with similar studies carried out in neighboring countries such as Morocco and in Western countries such as France, in which 55.9% and 67% of medical students, respectively, had no knowledge of standard precautions and expressed their ignorance of these measures, both during their training and during their internships. In this context, particular attention should be paid to continuing education and to raising awareness of standard precautions among students and HCWs. Providing adequate training and regular reminders of the need to follow these safety measures would help establish a safety culture, thereby reducing the risks and consequences of BEA. Overall, medical students tended to have a good level of e-health literacy, with a mean score of 28.22 ± 6.85; it is worth noting that more than half of the sample achieved a high level of e-health literacy. Our findings are roughly in line with recent studies conducted in Iran and Vietnam, which reported similar e-health literacy scores of 29.22 and 27.03, respectively. Meanwhile, a study conducted by Tanasombatkul et al. in Thailand reported a slightly higher e-health literacy score of 33.45. This demonstrates the growing importance of e-health literacy in medicine, not least among medical students.
One of the findings of this study is that medical students displayed positive attitudes towards e-health literacy. This is consistent with the results of a Saudi Arabian study conducted in 2020, in which 73.4% of students agreed on the relevance of using eHEALS in the daily life of a medical student. Similarly, a study in Thailand highlighted that 95.45% of the students surveyed found e-health literacy useful in making health-related decisions. As expected, participants with higher KAP scores were almost twice as likely to have high levels of e-health literacy. Our findings agree with those of an Egyptian study examining the relationship between e-health literacy, antibiotic use, and knowledge and awareness of antimicrobial resistance among non-medical university students, which showed that the higher the eHEALS score, the more rational the use of antibiotics. Similarly, studies on colorectal cancer screening in Japan and the United States found that a high eHEALS score was associated with better knowledge and screening practices, and that individuals with low e-health literacy scores were 44% less likely to be aware of colorectal cancer screening. In light of these findings, a number of recommendations should be considered to guide future action: raising awareness of the need to begin teaching healthcare safety before the start of hospital internships; reinforcing education and information about healthcare safety through hands-on (simulation) training; creating an online platform dedicated to information and awareness about BEA; developing mobile applications dedicated to BEA prevention that provide reminders, visual guides and up-to-date information; and offering certified online courses on BEA prevention and e-health literacy for healthcare professionals. The current survey appears to be the first study to address the knowledge, attitudes and practices of medical students regarding BEA and their potential link to e-health literacy. In addition, the high response rate of 54.10% strengthens the reliability and representativeness of our sample. Furthermore, our research follows a rigorous and transparent methodological approach based on reliable and valid instruments, which strengthens its quality and scientific impact. However, our work has some limitations that need to be taken into account. The current study may be subject to selection bias because it focused exclusively on undergraduate medical students enrolled at the Sousse Faculty of Medicine. This choice is justified by the practical feasibility of the study; however, it may limit the representativeness of the sample with respect to all medical students enrolled in Tunisian medical schools. In addition, the use of a self-administered questionnaire completed by the medical students surveyed may introduce reporting bias.
E-health literacy plays a critical role for medical students in the prevention of BEA. A good understanding of standard precautions and preventive measures is essential to minimize the risks of BEA. The current study highlights the importance of providing medical students with adequate training in standard precautions and BEA management, improving access to relevant information, and using digital tools to promote better understanding and practice of safety. These efforts are essential to ensure the safety of medical students and the quality of patient care.
Variation in quality of preventive care for well adults in Indigenous community health centres in Australia | 851edee8-ef5f-4e1a-b21d-0a46e903f14e | 3120646 | Preventive Medicine[mh] | As part of the response to high levels of chronic disease among Indigenous Australians, there has been increasing emphasis in recent years on delivery of preventive services in Indigenous primary health care services. This includes development and distribution of evidence-based, Indigenous population specific preventive care guidelines , introduction of Medicare reimbursed biennial health checks for Indigenous adults aged 15 years or over , and the newly released National Preventative Health Strategy which specifies targets and actions for multifaceted preventive care to "close the gap" in life expectancy between Indigenous and other Australians . Previous research provides limited information on delivery of preventive care in Indigenous primary care settings. Studies conducted in the Northern Territory (NT) have documented substantial deficiencies in delivery of preventive care to Indigenous adults in rural and remote communities: on average only 40-50% of preventive services were delivered in line with the best practice guidelines . Studies in Indigenous communities in Queensland (QLD) have not included data on the proportion of community members who received health checks. A national study using Medicare data showed that 3% of Indigenous Australians aged 55 years or over attending GPs had documented use of specific Medicare items for health checks . However, the study appeared to underestimate the uptake of preventive health checks as many Indigenous primary care services do not use Medicare items when delivering services to clients. Thus there is substantial potential to improve the quality of information on the delivery of preventive services to Indigenous people for the purpose of informing implementation of the National Preventative Health Strategy. The Audit and Best-practice for Chronic Disease Extension (ABCDE) project is a national quality improvement initiative which aims to improve quality of care in a range of priority aspects of Indigenous primary health care, including chronic disease care, preventive care, and maternal and child health care . During the past five years over 60 Indigenous community health centres from four States/Territories (NT, Far West New South Wales (NSW), Western Australia (WA) and North QLD) have formally participated in this project. The ABCDE data provide a unique opportunity to improve understanding of delivery of preventive care in Indigenous primary health care settings, and, importantly, to develop and implement strategies for improvement. This paper reports baseline data on delivery of preventive care in Indigenous community health centres participating in ABCDE with a focus on variation in quality of care between services and across different participating regions, and identifies the various factors associated with these variations, both at the health centre level and at the individual level.
Participation by health centres was from five regions where we had established project hub coordinators (Figure and Table ). On a voluntary basis, health centre managers or staff made a request for their centre to join the project after receiving information through invitation letters, word of mouth or meeting presentations. Sixty six (66) health centres formally participated in the ABCDE project. Four (4) of these health centres did not have at least part time access to a GP and were excluded from the analysis for this paper. Baseline audits of preventive care were completed during 2005-2009. Audits covered both paper-based and electronic clinical records. The records of health centre clients who met all of the following criteria were eligible for inclusion: 1) aged between 15 and 54 years; 2) resident in the community for at least 6 of the last 12 months; 3) not having a diagnosis of diabetes, hypertension, ischaemic heart disease, rheumatic heart disease, renal disease or other major chronic illness; and 4) not pregnant or post partum at the time of the audit. Eligibility was verified by checking an up-to-date population list of health centre clients with assistance from health centre staff who knew the community and health centre well. A sample of 30 records, stratified by sex and age groups (15-24; 25-39; and 40-54 years), was selected randomly from records of eligible clients in each health centre. Thus, each random sample comprised 5 males and 5 females in each of the age groups. In communities where there were fewer than 5 people in a sex- and age-specific group, all eligible people in that group were included. The audit measured 16 selected service items (see Table ) which the preventive health care guidelines recommend for delivery every year or every two years for all Indigenous well adults aged 15-54 years. A summary of detailed guideline recommendations in relation to the 16 service items is presented in Table . We adopted a minimum approach to assess whether these services were delivered on a two-yearly basis. A service was assessed as delivered if there was a clear record of delivery of the service at least once within the previous 24 months. The overall adherence to delivery of scheduled services for each adult was calculated by dividing the sum of services delivered by 16 (for females) or 15 (for males - pap smear excluded), and expressing this as a percentage. For example, if there were 6 services assessed as delivered for a male client, the overall adherence to delivery of services for the client was 40% (6/15), interpreted as "40% of guidelines-scheduled preventive services were delivered to the client". Health centre-level adherence was computed as the mean of individual adherence to delivery at each centre. For each individual service item, a percentage (from 0 to 100%) was calculated at each health centre to represent "% of patients who received the specific preventive service". Clinical records were also audited for evidence of abnormal blood pressure readings, positive protein in urine and abnormal blood glucose readings. For any abnormality found we checked for a record of a follow-up as outlined in Table . A percentage was calculated for each health centre to represent "% of adults who had appropriate follow-up of abnormal findings".

Statistical analysis

The quality of preventive care was measured in terms of adherence to delivery of scheduled services and follow-up of abnormal findings.
Treating health centres as the unit of analysis, we compared the quality of care (based on mean percentages) between regions using linear regression models (Tables and ). Centre percentages and mean percentages are unweighted. When treating individual clients as the unit of analysis, our data had an inherent multilevel, dependency structure, as preventive care data collected at the individual client level (level 1) were clustered within health centres (level 2). Two-level random effects regression models (linear or logistic) were used to 1) quantify the amount of variation attributable to health centre and individual level characteristics (Table ); and 2) examine associations of specific factors with quality of preventive care (Tables and ), as outlined below:

1) Adherence to delivery of services and follow-up of abnormal findings were treated as dependent variables in the respective random effects models. We constructed a two-level (health centre and client levels) random intercept model with no explanatory variables (also known as an empty model). In the context of multilevel modelling, the empty model provides an estimate of the basic partition of the variability in the data between the two levels. Based on the model, an intra-class correlation coefficient (rho in Stata) between two randomly drawn individuals in a given health centre was estimated. The intra-class correlation coefficient can also be interpreted as the fraction of total variability in the dependent variable that is due to health centre level characteristics (a standard expression for this quantity is given after this list); the remaining variation is attributable to client level characteristics. The term "characteristics" used here refers to measured and un-measured factors at the health centre and client levels.

2) Using two-level random effects regression models, we also tested associations of specific factors at the health centre level (location, health service governance, accreditation status and population size) and the individual level (age and sex) with the quality of preventive care.
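For reference (a standard result for random-intercept models, not specific to this study), the intra-class correlation estimated from the empty model can be written as

\[
\rho = \frac{\sigma_{u}^{2}}{\sigma_{u}^{2} + \sigma_{e}^{2}},
\]

where \(\sigma_{u}^{2}\) is the between-health-centre (level 2) variance and \(\sigma_{e}^{2}\) the client-level (level 1) residual variance; in the random-effects logistic case the latent residual variance is conventionally fixed at \(\pi^{2}/3\).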
We obtained approval from formally constituted Human Research Ethics Committees (HREC, including Indigenous health research committees where such arrangements were in place) in each region in which the project operated, including the NT Department of Health & Community Services and Menzies School of Health Research HREC, the Central Australian HREC, the Western Australian Aboriginal Health Information and Ethics Committee, the Macquarie and Far West Area Health Services HREC, and the Townsville Health Service District HREC.
Of 62 participating health centres, 47% were managed by a local or regional Aboriginal committee (board), with the remainder government funded/operated (Table ). Sixty nine percent of centres did not have formal general practice accreditation and most (60%) served populations of less than 1000 people. Records of 1839 well adults were audited (Table ). The mean age of these adults was 32 years and 49% were men. Around 90% or more of records from the NT, WA and North QLD centres were for Indigenous people compared to 43% from Far West NSW centres. Twenty eight percent of adults were documented as smokers and 23% had documented alcohol misuse. Ninety two percent of participants had a record of health centre attendance within the previous 24 months, with acute care the main reason for attendance and nurses as the predominant health providers. Overall delivery of scheduled services was 34% (Table ), with substantial variation in this measure between health centres (range 5-74%) and moderate variation between regions (range 19-42%). For specific preventive services, variation in delivery was evident across different categories of services, different regions, and different health centres (Table ). Overall, adherence was relatively high for weight and blood pressure measurement and blood glucose testing (50-70%), followed by height measurement, urinalysis, pap smear and STI screening (30-40%), and waist circumference measurement and brief intervention/counselling on lifestyle modification (20-30%). Less attention was paid to oral health checks and brief intervention or counselling regarding emotional wellbeing (15-18%). However, the range between health centres for delivery of almost all of these services was from 0% to more than 80%. Analyses of preventive service delivery between Indigenous and non-Indigenous adults in Far West NSW health centres showed no statistical difference in overall service delivery between the two groups. However, Indigenous adults were more likely to receive services related to BMI and waist circumference measurements, blood glucose testing and emotional wellbeing counselling. On average, health centre-level documentation of an abnormal blood pressure reading (≥ 140/90 mmHg) was found in 15% of adults, proteinuria in 20%, and abnormal blood glucose (≥ 5.5 mmol/L) in 37% (Table ). However, the range between health centres for these measures was 0-100%, 0-92% and 0-94%, respectively. North Queensland health centres had higher rates of abnormal blood pressure, proteinuria and abnormal blood glucose compared with other regions (P < 0.05 for comparison with NT Top End). Of those with identified abnormal clinical findings, overall about 20-30% had a documented follow-up check/test or management plan, but the range between services was 0-100%. Client level characteristics accounted for a large proportion of the variation in delivery of services and in follow-up of abnormal findings: 69% for overall adherence to delivery of scheduled services (with a range of 53-79% for specific services); and between 62-87% for follow-up of abnormal findings (Table ). Age and sex were both independently associated with overall delivery of services, with higher rates of delivery in the 25-39 year age group and in women (Table ).
Health centre level factors that were independently associated with higher levels of service delivery were location (remote community vs city), community population size (≤ 500 vs ≥ 1000), region (Top End vs FW NSW) and governance (Indigenous committee/board operated vs government). For follow-up of abnormal findings, North Queensland had higher rates of follow-up of abnormal BP (Table ). No other health centre or individual characteristics showed significant associations with follow-up of abnormal results.
There is substantial room to improve the quality of preventive care to Indigenous adults in many locations across Australia - in terms of overall delivery of services, in delivery of a range of specific services and in follow-up of abnormal findings from routine health checks. Overall, it appears that about one third of the recommended preventive services were delivered to clients in participating health centres. Variation in overall delivery of guideline-scheduled services between health centres is striking, with the lowest adherence to delivery being 5% and the highest being 74%. For specific important measures such as BP screening, overall 71% of adults have a record within the previous two years. However, the variation between centres of 23-100% reveals a critical requirement for action in some health centres. The generally small proportion of clients with records of or plans for follow-up of abnormal clinical findings among these 'well' adults also highlights an important area for improvement. Limitations of this study include: 1) Health centres were not randomly selected, their participation in the project was on a voluntary basis, and enrolment was staggered over a period of some years. Therefore, these data are not representative of the regions involved, and differences between health centres may partly reflect the introduction of new policies over time. A longitudinal analysis including services with more than three years of data through participating in this project will be reported separately. 2) We relied on clinical medical records to retrieve preventive care data, which may underestimate actual service delivery if delivered services are not recorded in clinical records. While failure to document services may mean that services were delivered at higher levels than reflected in our data, failure to document delivered services is itself a significant barrier to continuity and coordination of care and to preventing duplication and over-servicing, especially in areas of high workforce turnover; it is therefore in itself a deficiency in quality of care. 3) The unweighted, age- and sex-stratified random samples are designed to facilitate analysis of quality of care between communities; estimates based on this sampling approach may differ from those of sampling approaches designed to provide population estimates. The pattern of delivery of different services (with blood pressure checks and blood glucose testing for well adults being relatively high, followed by delivery of urinalysis, pap smear and STI screening and provision of brief interventions/counselling related to lifestyle change, and with the lowest levels of delivery for oral health checks and counselling on emotional wellbeing) to some extent reflects a gradient in the strength of evidence for the preventive services specified in best practice guidelines. Practitioners appear less likely to provide some services where the availability of referral services (e.g. for dental and mental health care) is limited. However, the low proportion of adults identified as smokers relative to known smoking rates in these communities is an example of an important gap in documentation of major risk factors for which relatively simple primary care interventions with a reasonably well-established evidence base exist.
The comparability of these findings with similar studies is limited by the inclusion in those studies of the general adult population, while our study focuses on an age- and sex-stratified (unweighted) random sample of well adults. People with chronic illness are likely to have increased contact with the health system and more opportunities to receive preventive services, so delivery of preventive services to well adults might be expected to be lower than for people with chronic illness. As indicated above, the stratified random sample used in our study may also not be representative of the study populations in each community or of the study populations for all communities combined. Bearing these restrictions on comparability with other studies in mind, we note that delivery of some services (such as BP) in our study population compares reasonably well. However, the generally low levels of delivery of care, the well known burden of chronic disease in this population, the importance of early detection and treatment, and the high rates of attendance by our study population at primary care centres mean that many important opportunities are being missed and there is clearly a need for better delivery of preventive services. The high prevalence of health problems among "healthy adults" and the low follow-up of identified problems are of significant concern. Similar to previous reports from Indigenous primary care settings, about 20%-40% of the participating "healthy adults" in our study had abnormal blood pressure, abnormal blood sugar levels, or proteinuria. This highlights the importance and necessity of systematically implementing preventive care for adults in Indigenous communities for early detection and management of preventable chronic disease. A parallel priority in preventive care is to effectively follow up and manage the abnormal conditions identified. Failure to follow up and implement management plans means resources and efforts invested in regular checking and screening of well adults are wasted and cannot be translated into improved health outcomes. Our analysis of variation in preventive care indicates health centre level and individual client level factors have a similar level of influence on delivery of preventive care. The finding that accreditation of services is not clearly associated with quality of care is consistent with other research on this topic, and indicates the need for a more active approach to quality improvement (e.g. routine use of clinical data to monitor and improve quality of care; ongoing engagement of health centre staff in service planning, system redesign and implementation of improvement initiatives). The finding that delivery of preventive services is better in remote locations and worse in health centres with large service populations is likely to be at least partly due to greater use of a number of different providers by clients living in non-remote settings or larger centres. The finding that delivery of preventive services is better in community controlled services than government managed services supports the contention that community control (through its philosophy, organisation or funding) facilitates quality and access to care. The substantial proportion of variation in preventive care attributable to client level factors points to the importance of health centre systems delivering care in a way that most effectively meets the varying needs of individual clients.
Regarding client level factors, we only collected demographic information on participants (i.e., age, sex and Indigenous status). Male participants appeared less likely to access preventive services than females in our study. This may reflect the perceptions of many Indigenous men who consider health centres to be "women's places", as health centres in remote communities are predominantly staffed by females. A gender-appropriate workforce and infrastructure may encourage Indigenous men to make better use of health services. Other client level factors, such as health literacy, perceptions of the physical, social and cultural accessibility of the centre, and factors which influence a client's relationship with health centre staff, are important influences on the delivery or uptake of preventive care. Future research should investigate these questions, as well as the associations of specific health centre system factors with preventive care. Beyond addressing potential health centre and client level factors, a supportive health policy has been recognised as critical to the implementation of preventive care at the population level. The introduction of a new Medicare item (item 710) in 2004 for health assessment of Indigenous people aged 15-54 years has been welcomed as an example of innovative policy in Indigenous health. However, the impact of this measure is unclear, and the longitudinal analysis of the ABCDE data should provide some evidence in this area. Previous research indicates that Medicare rebates for providing preventive care may have less effect in motivating practitioners working in remote Indigenous community health centres, who are usually in salaried positions. More recent policy developments include new legislation which authorises practice nurses and Aboriginal Health Workers to access some Medicare items (e.g. for provision of immunisation and follow-up services for Indigenous people after health assessment), and the introduction of the Indigenous Practice Incentives Program (PIP) to encourage population-based care. Further refinement of health policies includes strengthening direct financial and workforce support to health centres based on the needs of defined populations.
There is great potential to improve delivery of preventive services to well adults in Indigenous primary care settings. Particular attention should be given to improving follow-up of abnormal clinical findings identified by preventive health assessments. The national collaborative approach that underpins the data presented in this paper provides a significant opportunity to advance understanding of variation in care and to develop and examine the effect of innovative strategies to enhance the quality of care for Indigenous Australians.
The authors declare that they have no competing interests.
RB played a lead role in conceptualisation of study design, development of measurement tools, project management, and revising of the manuscript. DS played a major role in reviewing the literature and conceptualisation, conducted data analysis, and drafted the manuscript. CC, AB, TW and ST and HB contributed to study design and facilitated engagement of health services. MD and LO contributed to study design and development of measurement tools. MD, LO, RK, CK, RC, HL, JH carried out field work and conducted data collection. All authors contributed to the interpretation of findings, read and approved the final manuscript.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/11/139/prepub
A bidirectional reversible and multilevel location privacy protection method based on attribute encryption | 10c77696-7049-4443-8e6f-acefde0d7e1d | 11379367 | Psychiatry[mh] | Various applications and services utilizing mobile devices have emerged alongside the continuous advancement of mobile networks. Location-based services (LBS) have gained particularly significant traction. Although LBS offer convenience, they also tend to result in users inadvertently leaving extensive location and trajectory data on service platforms. This data, if analyzed and combined with additional background knowledge by malicious third parties, can severely compromise users’ privacy. For instance, attackers might deduce a user’s health conditions by analyzing queries made near medical facilities. Researchers have developed several methods to mitigate the privacy risks associated with LBS, including location confusion, trajectory offset, dummy information, and k-anonymity. However, these methods generally adhere to a rigid “all-or-nothing” privacy standard, offering either complete and uniform privacy protection or none at all. This fails to meet users’ demands for personalized and multi-level privacy protection. Data cannot be restored to its original state after it has been anonymized, rendering it less useful for a variety of user requirements. Existing privacy protection methods are predominantly single-layered and coarse-grained, providing a uniform level of privacy that fails to support personalized or multi-level protection. Moreover, these methods are typically unidirectional and irreversible. Dummy data cannot be removed once it is integrated into the dataset, leading to a permanent reduction in data quality and decreased utilization efficiency. While some methods do offer reversible privacy protection, their excessively complex data encryption processes and anonymity algorithms can significantly impede data processing. To address these issues, this paper proposes a bidirectional, multi-layer reversible location privacy protection method based on attribute encryption. This method provides layered, bidirectional, and fine-grained privacy safeguards. Multi-level privacy protection for location data is achieved through a hierarchical privacy scheme that incorporates varying levels of dummy information. It utilizes ciphertexts of dummy information identifiers to control the degree of de-anonymization based on users’ individual trust levels, enabling reversible transformations between data anonymization and de-anonymization. Furthermore, the method uses an attribute-based encryption access control system to manage resources and streamline key generation and distribution, further enhancing the granularity of privacy protection. The proposed method is applicable three distinct scenarios: when users with varying trust levels access the same data resources, when user identity must remain unknown for granting permissions, or when anonymized data needs to be restored. Thus, users with different trust levels can obtain data with varying degrees of precision from the same anonymized dataset. Even without prior knowledge of a user’s identity, access control authorization is necessary. Importantly, this ensures that data anonymization does not lead to a permanent degradation of data quality, allowing for the restoration of anonymous data to its original state. 
The main contributions of this work can be summarized as follows: Firstly, a multi-level location privacy protection method is proposed that addresses the limitations of "all-or-nothing" privacy standards. It includes multiple privacy levels, incorporating varying amounts of dummy information and generating a series of dummy information identifiers for multi-level protection of private location data. This method enhances privacy protection effectiveness by constructing a position adjacency table and selecting random location points via a hash function. Dummy information identifiers are encrypted using an access structure tree, which controls the degree of de-anonymization based on users' trust levels, thus balancing privacy protection with data utilization efficiency. Secondly, a novel bidirectional method is introduced that resolves the issue of irreversible data loss. An access policy is defined using an attribute encryption access control mechanism, incorporating an access structure tree, where user attributes are employed as encryption parameters. A trusted third party authenticates user attributes and generates decryption keys for the ciphertext of the identifier files, enabling privileged users to perform de-anonymization operations. This allows for reversible transformations between data anonymization and de-anonymization, streamlines resource control, and reduces complexities associated with key generation and distribution, thus achieving fine-grained privacy protection. Thirdly, experiments conducted on real datasets confirm the feasibility and effectiveness of the proposed method. By comparison against existing methods, it is shown to more efficiently safeguard user location and trajectory data while ensuring bidirectional, multi-level, and multi-granular privacy protection. The rest of the paper is organized as follows: Section 2 reviews related work; Section 3 presents the research content and proposed method in detail; Section 4 analyzes the security of the method; Section 5 reports the experiments and result analysis; and Section 6 summarizes the paper and outlines future research directions.
LBS provide remarkable convenience in users' daily lives but also pose significant risks owing to the potential leakage of private locations and movement trajectories. Researchers have developed various privacy protection methods to address these concerns, such as confusion techniques, location offsets, and dummy information. Among these, the k-anonymity method is known to effectively balance data availability and privacy security, while the differential privacy method is noted for its rigorous data model; both have become important topics of research in this field. Attacks such as query semantic analysis have also been shown to undermine anonymity. For instance, Yang et al. proposed a dual privacy protection scheme based on a multi-anonymous architecture. This method encrypts queries via the Shamir mechanism and enhances privacy by replacing sensitive semantic locations with anonymous sets that reflect user diversity. However, the encryption and decryption of query content can lead to excessive response times and diminished quality of service. Wang et al. proposed an L-clustering algorithm based on differential privacy, which clusters users' locations based on duration of stay, frequency, and sensitivity while incorporating Laplacian noise for privacy protection. However, this method's consumption of the privacy budget is burdensome. Xing et al. developed a distributed k-anonymous location scheme that forms anonymized groups based on users' interests and social behaviors, reducing the risk of attacks leveraging background knowledge. These methods often rely on a central trusted server for data anonymization, however, which raises concerns about potential data breaches and underscores the demand for more innovative, distributed privacy protection approaches. The decentralized nature of blockchain technology offers novel solutions for privacy protection. Zhang et al. suggested a method based on a (t, n) threshold scheme and smart contracts, encrypting and distributing user queries via a private blockchain and the Shamir algorithm to prevent collusion attacks. This method also incentivizes timely submission of anonymous queries through smart contracts. Although it integrates blockchain technology, the approach falls short of achieving a fully decentralized LBS. To address the risk of privacy breaches by untrusted collaborators and the leakage of semantic location information, Yang et al. introduced a mechanism that combines blockchain with a user-related semantic location model. It leverages public chains for issuing privacy requests and private chains for selecting anonymous locations, using smart contracts to enhance the security of the collected semantic information. However, this method lacks clarity in the implementation of the private chains and smart contracts, which may hinder its practical application. Additionally, Zhu et al. proposed a blockchain-based scheme for privacy-preserving location sharing, in which precise locations are converted into broader areas, the detail of the shared location varies with the trust level of the requester, and a Merkle tree is used for data segmentation. Shen et al. proposed combining blockchain and machine learning technologies to securely store transaction and trust data, thereby protecting against malicious tampering and addressing other significant privacy concerns associated with the Internet of Vehicles.
Despite their utility in some regards, the prevailing privacy protection methods generally adhere to a rigid "all-or-nothing" standard whereby they either provide complete and uniform privacy protection or none at all. This fails to address users' needs for personalized and multi-level privacy options. Moreover, these systems do not allow user location data to be reverted to its original form once it has been anonymized, leading to irreversible loss of data quality and negatively impacting data utilization efficiency. Li et al. [ – ] proposed a reversible location anonymity method designed to restore location data for mobile device users. Their method employs a spatiotemporal anonymity model to reversibly alter location data, achieving high spatial resolution and commendable success rates. However, the complexity of the data encryption process and the choice of anonymity algorithms compromise data processing efficiency. The method continuously reconstructs links from previously selected ones during the anonymization process, adjusting selections based on current conditions and thus creating excessive time complexity. On the one hand, it requires lengthy anonymization runtimes to construct conflict-free links in real time; on the other, it demands significant memory for storing conflict-free links, similarly failing to meet users' requirements for real-time, efficient privacy protection. Buccafurri et al. introduced a hierarchical location-based trusted service scheme based on the edge cloud paradigm, which distributes user information among hierarchical regions managed by different autonomous organizations. Lower-level services manage exact location data, whereas higher-level services manage only aggregated data, which addresses the potential privacy leaks caused by centralized service failures. Though these methods secure user location data in LBS, they offer neither multi-granularity privacy protection tailored to actual user needs nor support for reversible or fine-grained safeguarding. Moreover, once the data has been anonymized it cannot be restored to its original state, which seriously affects data utility. Other relevant privacy protection methods are summarized in . As discussed above, the irreversible data anonymization process severely impairs data efficiency. The current research primarily centers on anonymization and neglects the potential for data de-anonymization, even though, in practical analysis applications, de-anonymization is crucial to fully harness the value of such data. To address these shortcomings, this paper proposes a bidirectional, multi-layer reversible location privacy protection method based on attribute encryption. This method not only supports bidirectional operations but also offers multi-layered, fine-grained, and personalized privacy safeguards, catering to diverse user demands and facilitating data reversibility in multi-user, multi-demand scenarios. Important distinctions between the proposed method and existing methods are twofold: firstly, it facilitates bidirectional, reversible processing of private location data by incorporating dummy information of varying strengths across different levels. This not only provides anonymized privacy protection but also allows for the refinement of de-anonymized data. It establishes multi-level privacy protection tailored to users' needs, enabling those with different permissions to access data at different levels of anonymity and precision.
Secondly, the proposed method enhances resource control through an attribute-based encryption access control system. This system manages the encryption of the dummy-information identifier files used for de-anonymization, as well as the generation and distribution of attribute keys, achieving reversible and robust privacy protection.
To achieve bidirectional, reversible, and multi-level privacy protection for mobile users' location and trajectory data, the proposed method integrates a variety of techniques including privacy protection, data encryption, access control, and attribute encryption. The data owner first establishes privacy protection levels and incrementally adds dummy information, generating corresponding identifier files for each level. These files catalog all dummy information incorporated at that particular level. The data owner crafts an access policy for each identifier, creates an attribute access structure tree, and uses this tree as a parameter to encrypt the identifier files, producing identifier-file ciphertexts. The data owner then transmits the final anonymized dataset along with these ciphertexts to the data service center and sends the access structure tree to a trusted third party. A privileged user, whose attributes satisfy the access criteria attached to the access structure tree, can request a decryption key from the trusted third party. Upon obtaining this key, the user is able to decrypt the relevant dummy information identifier file, carry out the de-anonymization process, and access a more precise dataset at the desired level of privacy protection.

3.1 Workflow

In this paper, hierarchical privacy protection is achieved by adding dummy information layer by layer; keys are generated and managed using access control technology based on attribute encryption, and anonymous datasets are de-anonymized using the dummy-information identification files. Together, this enables bidirectional, reversible, multi-level, and fine-grained protection of users' private location data. The whole process comprises the following seven steps: adding dummy information, generating dummy-information identification files, setting access control policies, publishing the anonymous dataset, encrypting identification files, generating attribute keys, and user data access. The specific workflow is shown in .

(1) Adding dummy information. According to privacy protection requirements, the anonymized data are divided into N levels, denoted L0, L1, L2, L3, ..., LN-1 and satisfying L0 < L1 < L2 < L3 < ... < LN-1, where L0 contains only real location information and LN-1 has the highest degree of anonymity, i.e., it is the final published anonymous dataset. A greedy algorithm is used to add dummy location information, randomly adding adjacent or connected locations to the current anonymous set.

(2) Generating identification files. While dummy information is added level by level according to the configured anonymity levels, the corresponding dummy-information identification files are also built level by level. Identification file Bi corresponds one-to-one with anonymity level Li; the lower the anonymity level, the more dummy location information is marked in its identification file, so the more dummy information can be removed by de-anonymization and the more accurate the resulting data. The identification files therefore satisfy B1 ⊇ B2 ⊇ B3 ⊇ ... ⊇ BN-1, and the number of dummy records at each level is |Bi| (a short code sketch of this nesting is given after this list). At each anonymity level, only dummy information is added and the identification file is generated; no anonymous data is published at that point. Finally, a unified anonymous dataset and N-1 identification files are generated.

(3) Setting access control policies. The data owner formulates the access policy for each dummy-information identification file and establishes the corresponding access structure tree. Only a user whose attribute set satisfies the policy can obtain the decryption key and decrypt the ciphertext of the identification file. The data owner need not know the identity of potential authorized users or the associated key set in advance, and in certain scenarios the anonymity of authorized users can even be preserved.

(4) Publishing the anonymous dataset. When the de-anonymization keys are generated, only the corresponding identification files are encrypted; the anonymous dataset itself is not encrypted, which improves data processing efficiency. The data owner generates a unified anonymous dataset and publishes it together with the series of identification-file ciphertexts. All data users and attackers face the same anonymous dataset, while privileged users can remove the corresponding level of dummy information using the matching de-anonymization key.

(5) Encrypting identification files. Based on the access structure tree, the data owner encrypts the identification files to generate a series of identification-file ciphertexts. Only a user who satisfies the access attribute conditions can obtain the decryption key and decrypt the identification-file ciphertext of that level.

(6) Generating attribute keys. According to the access control policy and the attribute certificate of the applicant, the trusted third party generates the keys for decrypting the identification-file ciphertexts at the different levels and then sends each key to the corresponding user.

(7) User data access. An ordinary user, lacking a key to decrypt any identification-file ciphertext, can only use the unified, highly anonymized data, so user privacy is protected. A privileged user can obtain the decryption key of an identification-file ciphertext from the trusted third party and thereby obtain relatively accurate location data, improving data utility.
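To make the nesting B1 ⊇ B2 ⊇ ... ⊇ BN-1 concrete, the following is a minimal C++ sketch, not the paper's implementation: the names RecordId, buildIdentificationFiles and deAnonymize are illustrative assumptions. It derives each level's identification file as the suffix union of the dummy records added per level, and shows de-anonymization as set removal:

#include <iostream>
#include <set>
#include <vector>

using RecordId = int;

// added[i] holds the dummy records inserted when raising the anonymity level
// from L(i-1) to L(i), for i = 1 .. N-1 (index 0 is unused: L0 is real data).
// B_i must mark every dummy in the published set L(N-1) added at level i or
// above, so B_i is a suffix union of 'added', giving B_1 ⊇ ... ⊇ B_{N-1}.
std::vector<std::set<RecordId>> buildIdentificationFiles(
        const std::vector<std::set<RecordId>>& added) {
    const std::size_t n = added.size();      // n == N, levels 0 .. N-1
    std::vector<std::set<RecordId>> b(n);    // b[i] will be B_i, for i >= 1
    for (std::size_t i = n; i-- > 1; ) {
        b[i] = added[i];                     // dummies of level i itself
        if (i + 1 < n)                       // plus everything above it
            b[i].insert(b[i + 1].begin(), b[i + 1].end());
    }
    return b;
}

// De-anonymizing with the key for level i: erase the records listed in B_i
// from the published dataset, recovering the more precise level L(i-1).
std::set<RecordId> deAnonymize(std::set<RecordId> published,
                               const std::set<RecordId>& bi) {
    for (RecordId id : bi) published.erase(id);
    return published;
}

int main() {
    // N = 4 levels: L0 (real records 1, 2) plus three rounds of dummies.
    std::vector<std::set<RecordId>> added = {{}, {101, 102}, {201}, {301, 302}};
    std::set<RecordId> published = {1, 2, 101, 102, 201, 301, 302};  // L3
    auto b = buildIdentificationFiles(added);
    auto l0 = deAnonymize(published, b[1]);  // remove B_1 -> exact data
    for (RecordId id : l0) std::cout << id << ' ';
    std::cout << '\n';                       // prints: 1 2
    return 0;
}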
Only a user whose attributes satisfy the policy can obtain the decryption key and decrypt the identification file ciphertext. The data owner need not know the identities of potential authorized users or their associated key sets in advance, and in certain scenarios the authorized users can even remain anonymous.

(4) Publishing the anonymous data set. When the de-anonymization keys are generated, only the corresponding identification files are encrypted; the anonymous data set itself is not encrypted, which improves the efficiency of data processing. The data owner generates one unified anonymous data set and publishes it together with the series of identification file ciphertexts. All data users and attackers face the same anonymous data set, while privileged users can remove the corresponding level of dummy information by de-anonymizing with the corresponding key.

(5) Encrypting identification files. Based on the access structure trees, the data owner encrypts the identification files to produce a series of identification file ciphertexts. Only a user who satisfies the access attribute conditions can obtain the decryption key and decrypt the identification file ciphertext at that level.

(6) Generating attribute keys. According to the access control policy and the attribute certificate of the requesting user, the trusted third party generates the keys for decrypting the identification file ciphertexts at the different levels and sends each key to the corresponding user.

(7) User data access. An ordinary user holds no key for decrypting any identification file ciphertext, so he can only use the unified, highly anonymized data, which protects users' privacy. A privileged user can obtain the decryption key for an identification file ciphertext from the trusted third party and thereby obtain relatively accurate location data, improving the efficiency of data use.

3.2 Constructing the undirected graph and adjacency table

Before anonymization, all road segments in the map are pre-processed in a conflict-free manner to establish a road segment undirected graph; dummy information is then selected according to these pre-assigned connections. Based on the undirected graph, a corresponding adjacency table is generated. The header of each adjacency list is a connection point of a road segment, and the nodes in the list are all other connection points directly connected to that point. The adjacency table consists of vertex nodes and table nodes. The structure of a vertex node is D = (ID, Name, FirstName, Note), where ID is the segment number, Name is the segment name, FirstName is the name of the first node directly connected to the vertex, and Note is a comment field that records whether the node has been added to the anonymous set. The structure of a table node is E = (AdjuvexID, Info, NextName, Notes), where AdjuvexID is the ID of a node directly connected to the vertex, Info is an information field used to store data such as weights, and NextName points to the next directly connected node. A minimal rendering of these two structures is sketched below.
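As an illustrative sketch only, the two node structures above might be rendered as C++ types as follows; the field types and pointer names (borrowed from Algorithm 1, which itself uses a pared-down ENode carrying only adjVex and nextArc) are assumptions, since the text specifies only the field names.

#include <string>

// Table node E = (AdjuvexID, Info, NextName, Notes): one entry in an adjacency list.
struct ENode {
    int adjVexID;        // ID of a connection point directly connected to the vertex
    double info;         // information field, e.g., an edge weight (assumed type)
    ENode* nextArc;      // link to the next directly connected table node (NextName)
    std::string notes;   // auxiliary comment field
};

// Vertex node D = (ID, Name, FirstName, Note): the header of one adjacency list.
struct DNode {
    int id;              // road segment number
    std::string name;    // road segment name
    ENode* firstArc;     // first directly connected node (FirstName)
    bool note;           // whether this node is already in the anonymous set
};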
When a dummy location point is to be added, a pseudo-random number is generated by a random function, and a location point is then selected by a hash function:

H_i(Key) = Key_i mod |A|    (1)

where Key_i is the generated pseudo-random number, |A| is the number of nodes in the adjacency table, and H_i(Key) selects a candidate dummy node in the adjacency list. When the corresponding connection point is selected in the adjacency list, its identification bit is checked first: if the bit is 0, the node is selected; if the bit is 1, the node has already been added to the anonymous set and the selection conflicts, so open addressing is used to resolve the conflict and reselect a new node:

H_i(Key) = (H(Key) + d_i) mod |A|    (2)

where i = 1, 2, …, s, H(Key) is the hash function, and d_i is an increment sequence chosen by linear probing, d_i = c·i with c = 1. Nodes are reselected by resolving conflicts in this way; if a conflict persists, the conflict hash function is applied again until an unoccupied node is found. After a location node is selected, the adjacency table is used to check whether the selected connection point is directly connected to any node in the current anonymous set, i.e., whether the selected node appears in the adjacency list of any node already in the set. If it is directly connected, the location point is added to the anonymous set and its identification bit is set to 1. Otherwise there is a conflict: a new pseudo-random number must be generated, and the connectivity of a newly selected node is checked again. For example, shows a city map, shows the constructed undirected graph, and the constructed location adjacency table is shown in . Therefore, for a new map, the location adjacency table corresponding to its undirected graph is first generated with Algorithm 1, and the map is anonymized afterwards. A sketch of the probing selection itself follows Algorithm 1.

Algorithm 1: Building a Location Adjacency Graph
Input: Undirected graph of links G = (N, E)
Output: Position adjacency list PL

#include <iostream>
using namespace std;

struct ENode { int adjVex; ENode* nextArc; };

int main() {
    int n, e, u, v;
    cin >> n >> e;                     // number of connection points and links
    ENode** a = new ENode*[n];         // a[i] heads the adjacency list of point i
    for (int i = 0; i < n; i++)
        a[i] = NULL;
    for (int i = 0; i < e; i++) {      // insert each undirected link (u, v) in both lists
        cin >> u >> v;
        ENode* t = new ENode;
        t->adjVex = v;
        t->nextArc = a[u];
        a[u] = t;
        t = new ENode;
        t->adjVex = u;
        t->nextArc = a[v];
        a[v] = t;
    }
    for (int i = 0; i < n; i++) {      // print the adjacency list PL
        cout << "a[" << i << "]";
        for (ENode* t = a[i]; t != NULL; t = t->nextArc)
            cout << "->" << t->adjVex;
        cout << endl;
    }
    return 0;
}
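The following sketch shows how Eqs. (1) and (2) drive dummy-node selection with linear probing, assuming the identification bits are kept in a plain array; the function name is hypothetical, and the connectivity check described in the text is omitted here.

#include <cstdlib>
#include <vector>

// Select an unmarked node index via Eq. (1) and, on conflict, the linear probing
// of Eq. (2) with d_i = i (c = 1). 'mark' holds the identification bits, where 1
// means the node is already in the anonymous set. Returns -1 if all are marked.
int selectDummyNode(const std::vector<int>& mark) {
    if (mark.empty()) return -1;
    int A = static_cast<int>(mark.size());     // |A|: nodes in the adjacency table
    int key = std::rand();                     // pseudo-random number Key_i
    int h = key % A;                           // Eq. (1): H_i(Key) = Key_i mod |A|
    for (int i = 0; i < A; i++) {
        int probe = (h + i) % A;               // Eq. (2): (H(Key) + d_i) mod |A|
        if (mark[probe] == 0)                  // identification bit 0: node is free
            return probe;
    }
    return -1;                                 // every node is already marked
}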
3.3 Building anonymous data sets with multiple levels

Anonymous data sets are constructed on the basis of the location adjacency table. During anonymization, each anonymity request contains a profile stating the user's privacy protection requirements, with the relevant parameters denoted (L, k, t, d): L is the number of anonymity protection levels, i.e., how many protection levels the anonymization is divided into; k is the anonymity parameter, which specifies the number of other users contained at the current anonymity level; t is a time threshold specifying the maximum tolerated time for anonymization; and d is a spatial threshold specifying the maximum acceptable extent of the anonymous region.

In the multi-level location privacy protection model, the anonymity levels are divided into N levels according to the privacy protection requirements, expressed as L_0, L_1, L_2, L_3, …, L_{N-1}, and the anonymity parameter corresponding to level L_i is:

k_i = k_1 + (i - 1)·k_1 = i·k_1    (3)

where 1 ≤ i ≤ N-1, L_0 contains only the real location information, and k_1 is the anonymity parameter corresponding to L_1. The anonymity degrees satisfy L_0 < L_1 < L_2 < L_3 < … < L_{N-1}, and L_{N-1} is the most anonymous data set, i.e., the final published one. Anonymization starts at L_1, where the anonymous data set is constructed according to that level's configuration parameters (k_1, t_1, d_1). Dummy location information is added to the set L_0 containing only the real user: first, k_1 location points directly connected to the locations in L_0 are selected from the current adjacency table to construct the anonymous data set M_1 at level L_1. When dummy locations are added, a greedy algorithm generates random numbers within a given range through the random function and hash function, and dummy locations are selected from the adjacency table by these random numbers until the privacy configuration parameters are satisfied. If the user's anonymity condition still cannot be satisfied within the set spatial threshold d, the anonymization fails. Then, on the basis of L_1, dummy information is continually added to M_1 until the configuration parameters (k_2, t_2, d_2) of anonymity level L_2 are satisfied; the set constructed at this point is M_2, where M_1 ⊆ M_2. This is repeated until the L_{N-1} anonymous data set M_{N-1} is constructed, with M_1 ⊆ M_2 ⊆ M_3 ⊆ … ⊆ M_{N-1}. Finally, the anonymous data set M_{N-1} is published. The construction process is shown in Algorithm 2, and a compact sketch of the level-by-level growth follows it.

Algorithm 2: Building an anonymous set
Input: Privacy configuration parameters (k, t, d), anonymity level L_i, adjacency list PL
Output: Anonymous set M, identification file set B

if i = 0 then
    M_0 = {s_0}                          // s_0 is the location of the real user
    B_0 = ∅
else
    for (i = 1; i < N; i++) {
        k_i = i·k
        H_j(Key) = Key_j mod |PL|        // generate a random number to select a location point
        select s_j ∈ PL
        if s_j is directly connected to some s_n ∈ M_i then
            M_i = M_i ∪ {s_j}; B_i = B_i ∪ {s_j}
        count = |M_i|
        while count < k_i {
            H_t(Key) = Key_t mod |PL|
            if s_t ∈ PL is directly connected to some s_n ∈ M_i then
                M_i = M_i ∪ {s_t}; B_i = B_i ∪ {s_t}
            count = |M_i|
        }
        B = B ∪ {B_i}
    }
return M_{N-1}, B

A greedy algorithm is used to add the dummy location information: it randomly adds adjacent or connected locations to the current anonymous set, and every dummy location added is connected to at least one location already in the current anonymous region. By means of the hash function and the adjacency table, the proposed method effectively prevents unconnected location information from being added, improving the privacy protection effect. Because a random function is used to select locations, all locations are selected with equal probability, so neither an attacker nor a data consumer can determine the exact location of the real user.
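As a minimal sketch of this construction under simplified assumptions, the code below grows nested anonymous sets with |M_i| = k_i = i·k_1 over a toy adjacency table and then derives the identification sets B_i = M_{N-1} \ M_{i-1} used in Section 3.4. The time and space thresholds (t, d) are omitted for brevity, a plain modular draw stands in for the hash selection of Eqs. (1)-(2), and all names are hypothetical.

#include <cstdlib>
#include <vector>

// True if candidate node c is directly connected to any member of M.
// 'adj' is the adjacency table: adj[u] lists the nodes directly connected to u.
bool connectedToSet(const std::vector<std::vector<int>>& adj,
                    const std::vector<int>& M, int c) {
    for (int m : M)
        for (int nb : adj[m])
            if (nb == c) return true;
    return false;
}

// Build M_0 ⊆ M_1 ⊆ ... ⊆ M_{N-1} starting from the real location s0, then
// return B, where B[i] = M_{N-1} \ M_{i-1} marks the dummies removable at level i.
std::vector<std::vector<int>> buildLevels(const std::vector<std::vector<int>>& adj,
                                          int s0, int N, int k1,
                                          std::vector<int>& M) {
    std::vector<int> mark(adj.size(), 0);        // identification bits
    std::vector<int> levelSize = {1};            // |M_0| = 1: only the real user
    M = {s0};
    mark[s0] = 1;
    for (int i = 1; i < N; i++) {
        int ki = i * k1;                         // Eq. (3): k_i = i * k_1
        while ((int)M.size() < ki) {             // grow M_i until |M_i| = k_i
            int c = std::rand() % (int)adj.size();  // stand-in for the hash selection
            if (mark[c] == 0 && connectedToSet(adj, M, c)) {
                M.push_back(c);                  // dummy connected to the current set
                mark[c] = 1;
            }
        }
        levelSize.push_back(ki);                 // remember |M_i|
    }
    std::vector<std::vector<int>> B(N);
    for (int i = 1; i < N; i++)                  // dummies added at levels >= i
        B[i] = std::vector<int>(M.begin() + levelSize[i - 1], M.end());
    return B;
}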
Example 1. In the above example, assume that road segment s_7 contains the real location of the user. The initial set containing only the real location is M_0 = {s_7}, which serves as anonymity level L_0. If the initial privacy protection parameter is k = 3 and k is incremented at each subsequent level, the anonymity parameter of level L_i is k_i = k + (i-1)·k = i·k, i.e., k_i = 3 + (i-1)·3 = 3i. For anonymity level L_1, k_1 = 1·3 = 3. Using the random function and hash function, a location point H directly connected to location point G of s_7 is selected in the location adjacency table, and its identification bit is set to 1; the corresponding segment s_8 is added to the anonymous set, giving M_1 = {s_7, s_8}. Since |M_1| < k_1, dummy location information continues to be added: the location point J directly connected to location point G or H of s_7 is selected in the adjacency table, its identification bit is set to 1, and the corresponding segment s_9 is added, giving M_1 = {s_7, s_8, s_9}. Now |M_1| = k_1, the anonymization of this level is complete, and the corresponding anonymity level is L_1. For anonymity level L_2, k_2 = 2·3 = 6. The location point A directly connected to a location point of {s_7, s_8, s_9} is selected from the adjacency table, its identification bit is set to 1, and the corresponding segment s_4 is added, giving M_2 = {s_7, s_8, s_9, s_4}. Since |M_2| < k_2, dummy information continues to be added: through the random function and hash function, location points F and I directly connected to location points of {s_7, s_8, s_9, s_4} are selected, their identification bits are set to 1, and the corresponding segments s_5 and s_11 are added, giving M_2 = {s_7, s_8, s_9, s_4, s_5, s_11}. Now |M_2| = k_2, the anonymization of this level is complete, and the corresponding anonymity level is L_2. For anonymity level L_3, k_3 = 3·3 = 9, and the same method constructs M_3 = {s_7, s_8, s_9, s_4, s_5, s_11, s_2, s_3, s_10}. When |M_3| = k_3, the anonymization is complete and the corresponding anonymity level is L_3. Finally, four levels of anonymous sets are formed: L_0 contains only the real user location, and from L_1 to L_3 the degree of anonymity is gradually strengthened, with more and more dummy information added to the anonymous set. Specifically:

L_0: M_0 = {s_7}
L_1: M_1 = {s_7, s_8, s_9}
L_2: M_2 = {s_7, s_8, s_9, s_4, s_5, s_11}
L_3: M_3 = {s_7, s_8, s_9, s_4, s_5, s_11, s_2, s_3, s_10}

3.4 Creating the dummy information identification files for de-anonymization

The dummy information identification file marks, level by level, the dummy information added at each anonymity level, so that the corresponding dummy information can be removed accurately in the de-anonymization stage to obtain accurate data of different precisions. This guarantees the reversibility of the anonymous data and the bidirectional nature of the anonymization process. Identification files correspond to anonymity levels, i.e., each anonymity level has one identification file. The structure of an identification file is B = (ID, L, FID), where ID is the serial number, L is the degree of anonymity, and FID identifies the added dummy information; a minimal rendering of this structure follows.
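As an illustrative sketch, the identification file B = (ID, L, FID) might be represented as follows; the container choice for FID is an assumption.

#include <set>
#include <string>

// Identification file B = (ID, L, FID) for one anonymity level. The nesting
// B_1 ⊇ B_2 ⊇ ... ⊇ B_{N-1} holds over the fid sets of successive levels.
struct IdentificationFile {
    int id;                     // serial number of the file
    int level;                  // anonymity degree L_i this file corresponds to
    std::set<std::string> fid;  // identifiers of the dummy locations it marks
};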
According to the anonymity levels L_1 < L_2 < L_3 < … < L_{N-1}, the dummy information identification files are established level by level as the corresponding amounts of dummy information are added, with the identification file B_i corresponding one-to-one to the anonymity level L_i. The lower the anonymity level, the more dummy location information is marked in its identification file; the more dummy information is removed by de-anonymization, the more accurate the data obtained. The identification files satisfy B_1 ⊇ B_2 ⊇ B_3 ⊇ … ⊇ B_{N-1}, and the amount of dummy data at each level is |B_i|. Each anonymization step only adds dummy information and generates identification files, without publishing anonymous data; finally, one unified anonymous data set and N-1 identification files are generated. An identification file marks only the privacy protection level and the ID numbers of the added dummy information. To improve the efficiency of encryption and decryption, only the identification files are encrypted before being sent to the data server; the data itself is not encrypted but is anonymized and distributed directly to all users.

Example 2. In the above example, an anonymous set with four anonymity degrees L_0, L_1, L_2, and L_3 is constructed, and the final anonymous set is M = {s_7, s_8, s_9, s_4, s_5, s_11, s_2, s_3, s_10}. L_0 contains only the real user, and the identification file sets corresponding to the other three levels are B_1 = {s_8, s_9, s_4, s_5, s_11, s_2, s_3, s_10}, B_2 = {s_4, s_5, s_11, s_2, s_3, s_10}, and B_3 = {s_2, s_3, s_10}. Each identification file marks dummy location information added to the final anonymous set. The higher the anonymity of an identification file, the less dummy location information it marks. For example, L_3 is more anonymous than L_2, so B_3 marks only the dummy information added on the basis of L_2, and the amount of dummy information it marks is significantly smaller than that of B_2.

3.5 Setting an access strategy

In this paper, we propose to encrypt data using an attribute set as the encryption parameter; only a user whose attributes satisfy the attribute set can obtain the private key and decrypt the data.

Definition 1 (Attribute). Assume A = {A_1, A_2, …, A_n} is the set of all attributes; then each attribute set S is a non-empty subset of A, S ⊆ A, so n attributes can identify 2^n users.

Definition 2 (Access Structure). Assume {P_1, P_2, …, P_n} is a set of participants and let S ⊆ 2^{{P_1, P_2, …, P_n}}. S is monotonic if for all B, C: B ∈ S and B ⊆ C imply C ∈ S. S is called an access structure if it is monotonic and non-empty, S ⊆ 2^{{P_1, P_2, …, P_n}} \ {∅}; the elements of S are called authorized sets.

Access structures mainly comprise threshold structures, attribute-value-and-operation structures, access tree structures, and LSSS matrix structures. Access tree structures are currently widely used in access control; they can be regarded as an extension of the single-layer (t, n) threshold structure and support AND, OR, and (t, n) threshold operations. A (t, n) threshold means the secret information is divided into n shares, at least t of which must be obtained to reconstruct the secret. The AND operation can be regarded as an (n, n) threshold, and the OR operation as a (1, n) threshold.
Definition 3 (Access Tree). T is an access tree. Each node in the tree is denoted x, the number of its child nodes is n_x, and its threshold value is k_x. Each leaf node represents an attribute, with k_x = 1 and n_x = 0. The relation between the threshold value of a non-leaf node and its number of children expresses the AND, OR, and (t, n) threshold relations over the attributes represented by the leaves: k_x = n_x represents an AND operation, k_x = 1 represents an OR operation, and 0 < k_x < n_x represents a (t, n) threshold. An access tree is shown in .

Example 3. In the above example, the identification files B_1, B_2, and B_3 are constructed according to the privacy protection levels, and the access trees corresponding to their access policies are set as T_1, T_2, and T_3, respectively, as shown in . For the identification file ciphertext of B_1, the access tree is T_1, and an accessing user must meet three conditions simultaneously: he belongs to company A, his position is M, and his level is senior. A user who satisfies the access structure can use the identification file to remove dummy information. For example, Jim is an employee of company A with position M and level senior, which satisfies the access structure; Tom is an employee of company A with position M and level intermediate, which does not. For the identification file ciphertext of B_2, the access tree is T_2, and an accessing user must meet two conditions simultaneously: he belongs to company A and his position is M. For example, Jack is an employee of company A with position M, which satisfies the access structure; Alice is an employee of company A with position N, which does not. For the identification file ciphertext of B_3, the access tree is T_3, and an accessing user must either belong to company A, or belong to company B with position I. For example, John is an employee of company A, which satisfies the access structure; Martin is an employee of company B with position I, which also satisfies it; Smith is an employee of company B with position S, which does not.

3.6 Encrypting identification files and generating attribute keys

The data owner encrypts each identification file B_i into the ciphertext CB_i with the public key PK and the access structure tree T_i. He then sends the master key MK and the access trees T_i to the trusted third party (TTP), while sending the anonymous data set and the identification file ciphertexts to the data service provider (DSP). Data and attribute keys are stored separately on different servers to prevent private information from being leaked. When a privileged user u_i wants to use more accurate anonymous data, he sends his attribute certificate (CA) to the TTP and requests the decryption key SK for an identification file ciphertext. According to the access structure tree T_i and the attribute certificate CA_i, the TTP generates the decryption private key SK_i for the identification file ciphertext CB_i and sends it to u_i. Then u_i uses SK_i to decrypt CB_i, obtains the dummy information identification file B_i, and removes the corresponding dummy information from the anonymous data set to obtain relatively accurate data. Whether a certificate's attributes satisfy a given tree can be decided with a simple recursive check, sketched below.
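As a minimal sketch (only the policy-satisfaction check, not the cryptographic construction), the recursive routine below decides whether an attribute set satisfies an access tree built from (k_x, n_x) threshold gates, where AND is (n, n) and OR is (1, n); the type and function names are assumptions.

#include <set>
#include <string>
#include <vector>

// A node x of an access tree T: a leaf carries an attribute (k_x = 1, n_x = 0);
// an inner node is a (k_x, n_x) threshold gate over its children.
struct TreeNode {
    int threshold = 1;                  // k_x
    std::string attribute;              // set only on leaves
    std::vector<TreeNode> children;     // empty on leaves
};

// True if attribute set S satisfies the subtree rooted at x.
bool satisfies(const TreeNode& x, const std::set<std::string>& S) {
    if (x.children.empty())             // leaf: attribute membership test
        return S.count(x.attribute) > 0;
    int met = 0;
    for (const TreeNode& child : x.children)
        if (satisfies(child, S)) met++; // count satisfied children
    return met >= x.threshold;          // (k_x, n_x) threshold gate
}

For the tree T_1 of Example 3, the AND gate is a (3, 3) threshold over the leaves {company A, position M, level senior}; Jim's attribute set satisfies it, while Tom's does not.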
Example 4. In the above example, Jim is an employee of company A, his position is M, and his level is senior; his attributes satisfy the access structure tree T_1, so he can obtain the decryption private key SK_1 for the identification file ciphertext CB_1. Tom is an employee of company A with position M and level intermediate; his attributes satisfy no access structure, so he cannot obtain a decryption private key. Jack is an employee of company A with position M; his attributes satisfy the access structure tree T_2, so he can obtain the decryption private key SK_2 for CB_2. Alice is an employee of company A with position N; her attributes satisfy no access structure, so she cannot obtain a decryption private key. John is an employee of company A, and Martin is an employee of company B with position I; their attributes both satisfy the access structure tree T_3, so they can obtain the decryption private key SK_3 for CB_3. Smith is an employee of company B with position S; his attributes satisfy no access structure, so he cannot obtain a decryption private key.

3.7 User data access

After the data owner sends the anonymous data set M and the series of identification file ciphertexts CB_i to the data server, users access the data through that server. An ordinary user can obtain the anonymous data set M and the ciphertexts CB_i from the data server, but since he holds no key to decrypt any identification file ciphertext, he can only use the unified, highly anonymized data, which better protects the privacy of the data owner. A privileged user can likewise obtain M and the ciphertexts CB_i from the data server; in addition, he can obtain the corresponding key SK_i from the trusted third party, decrypt the corresponding identification file ciphertext, perform the de-anonymization operation to remove a certain amount of dummy information, and thus obtain relatively accurate location data, improving the efficiency of data use. The de-anonymization process is shown in .

Example 5. In the above example, the attributes of the ordinary users Tom, Alice, and Smith satisfy no access structure; they obtain no decryption private key for any identification file and therefore cannot perform the de-anonymization operation. They can only use the highly anonymized data set M = {s_7, s_8, s_9, s_4, s_5, s_11, s_2, s_3, s_10}, from which they cannot identify the real user location. The attributes of the privileged users John and Martin satisfy the access structure T_3; they can obtain the decryption private key SK_3 for the ciphertext CB_3, remove the dummy information {s_2, s_3, s_10} from M, and obtain the more accurate data set M = {s_7, s_8, s_9, s_4, s_5, s_11}. The attributes of the privileged user Jack satisfy the access structure T_2; he can obtain the decryption private key SK_2 for CB_2, remove the dummy information {s_4, s_5, s_11, s_2, s_3, s_10} from M, and obtain the more accurate data set M = {s_7, s_8, s_9}.
The attributes of the privileged user Jim satisfy the access structure T_1; he can obtain the decryption private key SK_1 for the ciphertext CB_1, remove the dummy information {s_8, s_9, s_4, s_5, s_11, s_2, s_3, s_10} from the anonymous data set M, and obtain the data set M = {s_7} containing only the real user.

In this paper, we propose an access control strategy based on attribute-based encryption to encrypt the identification files, forming the corresponding access structure according to the category of each identification file. Only a user whose attributes meet the attribute conditions can decrypt the identification files and de-anonymize the data. When the data owner needs to adjust a user's authority, he only needs to modify the access structure tree and re-encrypt the identification file, which reduces the time cost of regenerating and distributing keys. The specific execution process is shown in Algorithm 3; a sketch of the resulting de-anonymization step closes this subsection.

Algorithm 3: Attribute-Based Encryption and Decryption
Input: Security parameter λ, identification file B, access tree T, user attribute set S
Output: Identification file ciphertext CB, decryption key SK, plaintext identification file B

(MK, PK) = Gen(λ)        // from λ, generate the master key MK, kept by the data owner, and the public key PK used to encrypt identification files
CB = Encrypt(PK, T, B)   // encrypt identification file B into ciphertext CB under public key PK and access structure T
SK = KeyGen(MK, CA)      // generate the user's private key SK from the master key MK and the user's attribute certificate CA
B = Decrypt(CB, SK)      // decrypt ciphertext CB with private key SK to obtain plaintext B; Decrypt() succeeds only if S satisfies T

The participants in the attribute-based encryption access control system are the data owner, the trusted third party, the users, and the service provider. The data owner owns the data and shares it with other users through the service provider's data service; he is responsible for setting the access policies (the access structures T) and running the encryption algorithm to generate the identification file ciphertexts bound to those policies, which he then sends to the service provider. The trusted third party maintains the correspondence between each user's attributes and keys, runs the key generation algorithms to produce the public key PK and master key MK for the data owner, and generates the attribute keys SK for accessing users. It is the only participant in the access control system that must be fully trusted by the others; however, it is responsible only for issuing keys and cannot access the ciphertext data. The user is the visitor of the data: if his attributes meet the policy requirements of the associated ciphertext, the trusted third party generates the corresponding attribute key for him, after which he can run the decryption algorithm to obtain the plaintext identification file, de-anonymize the anonymous data, and access it with finer precision. The service provider offers outsourced storage of the data and various operational services to users; it is honest-but-curious, faithfully performing the operations users initiate while hoping to learn additional private content.
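To make the de-anonymization step concrete, the sketch below removes the dummy identifiers of a decrypted identification file B_i from the published set M, as in Example 5; the set-difference formulation and all names are assumptions consistent with B_i being a subset of M.

#include <set>
#include <string>
#include <vector>

// De-anonymize: given the published anonymous set M and a decrypted
// identification file B_i, drop the marked dummy locations. A user holding
// SK_i for level L_i thereby recovers the more precise set M_{i-1}.
std::vector<std::string> deAnonymize(const std::vector<std::string>& M,
                                     const std::set<std::string>& Bi) {
    std::vector<std::string> precise;
    for (const std::string& s : M)
        if (Bi.count(s) == 0)        // keep only locations not marked as dummy
            precise.push_back(s);
    return precise;
}

// Example 5: with M = {s7, s8, s9, s4, s5, s11, s2, s3, s10} and
// B_2 = {s4, s5, s11, s2, s3, s10}, deAnonymize(M, B_2) yields {s7, s8, s9} = M_1.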
The proposed method supports multi-level location privacy protection for LBS, allowing users to share anonymous location data at various granularities. Initially, a mobile user submits his actual location to a reversible anonymity system managed by a trusted LBS provider; such providers act as functional modules that facilitate both reversible processing and fine-grained location anonymity. In scenarios involving untrusted LBS providers, the reversible anonymization can instead be implemented via a trusted third-party anonymizer. The reversible de-anonymization process uses an attribute-based encryption access control policy: the data owner sets an access policy for each identifier file, generates a corresponding attribute access structure tree, and encrypts the identifier files using this tree as a parameter to generate ciphertexts. Users whose attributes satisfy the structure tree requirements can request a decryption key from the trusted third party to perform de-anonymization and access more precise data at the specified privacy protection levels. The security services implemented by the proposed method cover five main areas.

4.1 Confidentiality

A user's real location is blended with randomly added dummy locations so that the real location remains private. Multi-level location privacy protection is achieved by adding varying amounts of dummy information across the N hierarchical levels. Data users of varying trust levels receive data with the corresponding degree of anonymity but cannot access the real location information. Ordinary users cannot decrypt the encrypted identifier files to remove dummy information; they can access only the uniformly anonymized data. This provides robust, multi-level, multi-granularity protection of private location data and ensures that the user's private information is not leaked.

4.2 Multi-level privacy preservation

According to the privacy protection requirements, multi-level location privacy protection is achieved by adding varying amounts of dummy information across the N hierarchical levels: the lower the anonymity level, the less dummy information is added, and the higher the level, the more is added. The use, encryption, and decryption of the dummy information identifier files are managed through the attribute-based encryption access control system. Only privileged users who satisfy the access structure tree criteria can obtain decryption keys, which keeps both the keys and the real location data private. Since data users of varying trust levels receive data at the corresponding degree of anonymity, multi-level location privacy protection is achieved.

4.3 Restorability

All dummy information added to the anonymous set by the proposed method is marked level by level in the identification files. Privileged users whose attributes satisfy the access structure tree can request a decryption key from the trusted third party and perform the de-anonymization operation; they can remove dummy information to a certain extent, recover relatively accurate location data, and use the data efficiently, which achieves the restorability of the location data.

4.4 Authentication

The proposed method authenticates privileged users through the attribute-based encryption access control mechanism. The data owner sets the access control policy and generates the access structure tree.
The trusted third party verifies the privileged user's attribute certificate to confirm the user's identity; when the user's attributes meet the access structure conditions, the user obtains the decryption key and performs de-anonymization. By verifying attribute certificates, the trusted third party prevents illegitimate users from obtaining private information.
4.5 Availability
The proposed method ensures data availability in two ways. On the one hand, during anonymization, an undirected graph and adjacency table are constructed and a hash function is used to select adjacent location points, adding different amounts of dummy information at different levels; this keeps the anonymous data highly usable. On the other hand, a privileged user whose attributes satisfy the access structure can de-anonymize the anonymous data set, improving the accuracy of data use and ensuring higher data availability. In conclusion, the proposed method guarantees the confidentiality, availability, and authentication of data security services; it not only provides multi-level privacy protection but also enables recovery of the location data after anonymization.
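As a concrete illustration of Sections 4.1–4.3, the hedged sketch below models hierarchical dummy injection and its reversal with plain Python data structures. The data model (coordinate tuples, a dict as the identification file, a fixed number of dummies per level) is assumed purely for illustration; in the actual method the identification file would be encrypted under the attribute-based access structure and the set order would be permuted.

```python
import random

def anonymize(real_loc, candidates, n_levels, dummies_per_level=2):
    """Publish an anonymous set; the identification file marks which entries
    are dummies and at which level they were added (encrypted in practice)."""
    anon_set = [real_loc]          # index 0 kept real here only for brevity;
    id_file = {}                   # the real scheme permutes the set order
    for level in range(1, n_levels + 1):
        for _ in range(dummies_per_level):
            anon_set.append(random.choice(candidates))
            id_file[len(anon_set) - 1] = level   # index -> injection level
    return anon_set, id_file

def deanonymize(anon_set, id_file, permitted_level):
    """Privileged user: strip dummies up to the level the attribute key
    permits; the maximum level recovers the most precise data set."""
    drop = {i for i, lvl in id_file.items() if lvl <= permitted_level}
    return [loc for i, loc in enumerate(anon_set) if i not in drop]

candidates = [(30.70, 103.93), (30.71, 103.94), (30.72, 103.95), (30.73, 103.96)]
anon, id_file = anonymize((30.705, 103.935), candidates, n_levels=3)
print(len(anon), "->", len(deanonymize(anon, id_file, permitted_level=3)))  # 7 -> 1
```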
In this part, we evaluate the performance of the proposed method mainly in terms of the anonymity success rate, data efficiency, anonymization computational overhead, de-anonymization computational overhead, and entropy of the location set, and we compare it with other methods to verify its feasibility and effectiveness.
5.1 Data and experimental setup
5.1.1 Experimental data
The experiments use the Geolife dataset [ – ], which contains GPS trajectory data from 182 users collected over five years. Each location point records latitude, longitude, time, and other information; in total there are 24,876,978 location points and 18,670 trajectories, spanning 11,129 days and a total distance of 1,292,951 kilometers. The vast majority of the data were collected in Beijing, with a small portion from Europe or the United States. The data cover a variety of social activities, such as work, home, entertainment, and sports.
5.1.2 Experimental setup
The experimental environment is an Intel(R) Core(TM) i7 CPU with 32 GB memory running 64-bit Windows 10, and the method is implemented in Python 3.7. The privacy protection level is set to 10 (N = 1, 2, …, 10), and the privacy protection parameter k_i and spatial tolerance d are set as k_i = 10N and d = ⌈40·k_i/N⌉ for N = 1, 2, …, 10. The spatial tolerance d is a function of the privacy protection parameter k_i and the privacy protection level N, where d is in meters (m); the time tolerance t is fixed at 20 seconds. All experiments were repeated 100 times and the averages are reported as the results.
5.2 Experimental results and analysis
The proposed method (referred to from here on as BME-LPP) was compared against the method proposed in paper (referred to from here on as RPLE) and the method proposed in paper (referred to from here on as TPS). The RPLE method uses a spatiotemporal anonymity model to reversibly perturb user location information, achieving reversible location privacy protection for mobile users. The TPS method combines k-1 similar trajectories with the real trajectory to form a false trajectory region, realizing k-anonymity for a given location.
5.2.1 Anonymity success rate
The anonymity success rate measures the ability of a privacy protection method to resist attacks: the higher the rate, the harder it is for attackers to identify a user's true location from the anonymized data. shows a comparison of the anonymization success rates of the three methods. As the figure shows, the anonymity success rate decreases as k increases: larger k requires adding more dummy information to the anonymous set, and within the specified spatial tolerance d it becomes increasingly difficult to select enough qualified locations. Nevertheless, the proposed BME-LPP method achieves the highest anonymity success rate. On the one hand, it keeps the user's real location from leaking by adding position-related dummy information hierarchically, selecting the dummy information from adjacent segments, and securing the process with asymmetric encryption. On the other hand, encrypting the de-anonymization identification file with asymmetric encryption and attribute-based access control before transmission ensures that the user's actual location information is not leaked.
Therefore, the anonymity success rate of the proposed method is higher than that of the other methods. The RPLE method constructs anonymous sets by selecting position points associated with the current anonymous set as dummy position points, which also provides a high anonymity success rate; however, its de-anonymization process is not encrypted, which can lead to the disclosure of private information. The TPS method divides the real trajectory into sub-trajectories in time order, searches for similar trajectory segments in a historical trajectory data set, and assembles the similar sub-trajectory segments into a false trajectory for the user; it cannot resist similarity attacks, so its privacy protection effect is relatively low.
5.2.2 Data efficiency
Data utilization efficiency mainly refers to how efficiently third parties can use the published location data after anonymization, and it also reflects the rate of location-information loss during anonymization. In this paper, data efficiency is measured indirectly through the information loss of the published location data, computed as the ratio of the size of the final anonymous region to the size of the maximum allowable spatial region. The greater the information loss in the published data set, the less efficiently the data can be used. For privileged users, the proposed method performs de-anonymization according to the user's permission and can partially or completely remove the dummy information from the data set, so data availability can reach 100%. For ordinary users, who cannot decrypt the ciphertext of the identification file, the dummy information cannot be removed and full data utilization cannot be achieved. Therefore, in this section we discuss the data availability of ordinary users and privileged users separately. shows that the anonymized data utilization efficiency decreases as the privacy-preserving parameter k increases: as the parameter grows, more dummy information is added, the disturbance to the data set increases, and its utilization drops. As the figure shows, the proposed BME-LPP method achieves the highest data utilization. BME-LPP builds the dummy data set from a position adjacency table, selecting location points associated with the original location points so that the data characteristics of the set remain consistent, which yields high data availability. The RPLE method determines the selected positions by generating a series of pseudo-random numbers from a key and uses a local expansion algorithm to perturb the true position, so the resulting anonymous set shows greater diversity. The TPS method uses Euclidean distance to compute the straight-line distance between positions, which introduces some error when distinguishing similar positions. Their data availability is therefore relatively lower. For privileged users, the BME-LPP method can eliminate the dummy information and achieve higher data availability. shows that the de-anonymized data utilization efficiency increases with the de-anonymization level. In the experiment, the de-anonymization level is set to N = 5, the initial privacy protection parameter is k_1 = 10, and the parameter increases by 10 at each anonymity level, i.e., k_i = 10 + (i-1)·10 = 10·i (1 ≤ i ≤ N).
As the figure shows, when the de-anonymization level is 1, relatively little dummy information is removed and the data utilization efficiency is relatively low. As the de-anonymization level increases, more dummy information is removed and higher data utilization is achieved; the highest data utilization is reached at level 5.
5.2.3 Anonymization computational overhead
Anonymization computational overhead is an important factor in the quality of service for users and the most intuitive measure of experimental effect: for the same level of anonymity, the smaller the computational overhead, the better the privacy protection method. shows a comparison of the computational overhead of the various methods. The computational overhead of all the privacy-preserving methods increases with the privacy-preserving parameter k, because larger k requires adding more dummy information to the anonymous set and selecting more qualified positions. The figure also shows that the computational overhead of the proposed BME-LPP method for anonymization is relatively low: constructing the undirected graph and adjacency table to select the dummy information reduces the overhead, and the encryption of the de-anonymization identification files can be performed offline, which also keeps the overhead small. By contrast, the RPLE method needs a longer anonymization runtime to construct collision-free links on the fly, and it must generate correlation keys during anonymization to guarantee the reversibility of de-anonymization, so its computational overhead is higher than that of BME-LPP. The TPS method must compute the semantic, spatial, and temporal similarity between position trajectories, and its time complexity is the highest. shows that the computational overhead of anonymization decreases as the spatial tolerance d increases: a larger d expands the range of candidate dummy locations, making it easier to select locations that satisfy the conditions for the anonymous set. However, when the privacy parameter k is held constant, the overhead does not decrease significantly with d, because expanding the spatial region enlarges the pool of candidate dummy locations to some extent but does not make the anonymity condition itself easier to satisfy. As the figure shows, there is little difference in system overhead between d = 1000 and d = 1200. Therefore, the computational overhead of the proposed BME-LPP method decreases continuously as d increases, but it does not decrease indefinitely and gradually converges to a critical value.
5.2.4 De-anonymization computational overhead
De-anonymization computational overhead is likewise an important measure of the quality of service: for the same privacy protection effect, the smaller the overhead, the more efficient the method. shows that the computational overhead of de-anonymization increases slowly as the anonymity parameter k increases.
When k increases, more dummy information is added to the anonymous set, so more dummy information must be removed in the de-anonymization phase, which raises the computational overhead. The figure also shows that the de-anonymization efficiency of the proposed BME-LPP method is better than that of the RPLE method. Because BME-LPP removes the dummy information directly according to the dummy-information identification files, the de-anonymization process involves no complicated computation and its execution time is low. Although the overhead increases slightly with k, the gap is extremely small for the overall system: the dummy information marked in the de-anonymization identification file is removed directly, so the overhead is inherently small and the differences can be ignored. The RPLE method, however, must use the secret key to compute and select the dummy position information through a transformation matrix, which increases its time complexity. shows that the computational overhead of de-anonymization increases only slightly, and not significantly, as the spatial tolerance d increases. When the anonymity parameter k is held constant, the de-anonymization time barely changes with d: a larger d expands the range of candidate dummy locations and makes it easier to construct anonymous sets, but it has little effect on de-anonymization itself, whose overhead is dominated by removing the dummy location information. Therefore, the computational overhead of the proposed BME-LPP method changes little as the spatial tolerance d increases.
5.2.5 Location set entropy
Location set entropy measures the uncertainty of identifying the user within the anonymous set. The higher the entropy, the more similar the locations in the set, the greater the attacker's uncertainty in inferring the user's true information, and the better the privacy protection. The entropy of a location set is calculated as H = −∑_{i=1}^{k} p_i · ln p_i. shows that the location set entropy of the proposed method increases with the privacy-preserving parameter k. The higher the entropy, the lower the probability that an attacker can identify the user's true location: the more similar the location points in the set, the greater the uncertainty about which is the true location, and the better the privacy protection effect.
5.3 Experimental results
The experimental results show that the proposed BME-LPP method provides bidirectional, reversible, multi-layered privacy protection. Using the attribute-encryption-based access control method, it handles both the encryption management of the de-anonymization identification files and the generation and distribution of attribute keys. BME-LPP can refine the anonymous data to different degrees through the dummy-information identification files and recover more accurate user data, so it provides higher data utilization while protecting user location privacy. At the same time, although the method provides multi-level privacy protection, it publishes only a single anonymous data set and merely marks the dummy information hierarchically during anonymization, so the processing time is not increased.
Moreover, the encrypted transmission of the identification file can be carried out offline or independently, so the computational overhead is smaller than that of the encryption methods in reference . Although reference provides a good privacy protection effect, its query content is encrypted and decrypted, which increases the response time to a certain extent. Reference proposes an L-clustering algorithm based on differential privacy that clusters a user's long-stay points, high-frequency stay points, and sensitive location points, but its data utilization rate is lower. Reference can provide multi-level privacy protection, but it creates a list of transformations for each location point, which is spatially complex, and it must dynamically maintain the adjacency relationships of every location point, so its time complexity is high. Reference proposes a differential privacy protection method that adds noise to the resident points and cluster centroids; it provides a certain degree of privacy protection, but the added noise reduces data utilization. Reference selects pseudo-offset locations to construct secure anonymous sets, and reference improves privacy protection by replacing users' sensitive staying areas, but both incur large computational overhead. A comparison of the proposed BME-LPP method with other location privacy preserving methods is shown in .
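For reference, the location-set entropy metric of Section 5.2.5, H = −∑ p_i ln p_i, can be reproduced in a few lines of Python; the probability vectors below are illustrative only and are not taken from the experiments.

```python
import math

def location_set_entropy(probs):
    """H = -sum(p_i * ln p_i) over the k members of the anonymous set."""
    assert abs(sum(probs) - 1.0) < 1e-9      # probabilities must sum to 1
    return -sum(p * math.log(p) for p in probs if p > 0)

uniform = [1 / 10] * 10               # k = 10, members indistinguishable
skewed = [0.91] + [0.01] * 9          # real location easy to single out
print(round(location_set_entropy(uniform), 3))  # 2.303 = ln(10), best case
print(round(location_set_entropy(skewed), 3))   # 0.5, much weaker protection
```

The uniform set attains the maximum entropy ln(k), matching the intuition in the text that higher similarity among the set members means greater attacker uncertainty.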
In this paper, to address the problem that single-layer, one-way, coarse-grained privacy protection for location-sensitive data cannot meet users' actual privacy protection requirements, a bidirectional multi-layered location privacy protection method based on attribute encryption is proposed. It resolves the coarse-grained protection problem caused by rigid "all" or "none" privacy protection. The proposed method achieves bidirectional processing of location privacy: anonymized privacy protection on one side and de-anonymized refinement of data availability on the other. When dummy information is added to anonymize user data, a series of identification files marking the dummy information are generated; privileged users can use them to carry out multi-level de-anonymization and obtain more accurate user data. The method uses a hash function to generate random numbers and select dummy information, improving the privacy protection effect, and it uses an attribute-based encryption access control method to encrypt and manage the dummy-information identification files. Decryption keys are generated from user attributes, so users with different trust levels can perform different de-anonymization operations; this improves the efficiency of information processing while providing multi-level privacy protection. Experimental results on real data sets show that the proposed method has low computational overhead, a high anonymity success rate, and high data utilization efficiency. However, the proposed method mainly applies to scenarios with a single trusted third party, whereas distributed scenarios may involve multiple trusted authorities that must be supported simultaneously, each able to issue attribute keys independently. Therefore, our next step is to study reversible multi-layer encryption schemes for distributed multi-attribute application scenarios.
Prevalence and Antimicrobial Susceptibility of
The genus Salmonella belongs to the family Enterobacteriaceae and consists of two species, S. enterica and S. bongori. So far, a total of 2659 Salmonella serovars have been identified, of which 2637 belong to S. enterica. The species S. enterica comprises typhoidal and non-typhoidal Salmonella (NTS) and is responsible for the majority of human and animal Salmonella infections. NTS is one of the most frequently reported foodborne pathogens causing diarrhea worldwide and is responsible for 180 million diarrhea cases yearly. Livestock and poultry meat are the major sources of human Salmonella infections, and meat and meat products can become contaminated with Salmonella anywhere from farm to fork. In China, Salmonella accounts for around 70–80% of bacterial foodborne disease outbreaks and is one of the top two bacterial pathogens causing diarrhea [ , , ]. In addition, pork and chicken meat are consumed in large quantities in China, and Sichuan Province is a prominent producer of both; research on Salmonella contamination in these meats will therefore help identify potential risks. In recent years, the emergence and dissemination of multi-drug-resistant (MDR) Salmonella strains in animals and humans has become one of the top global public health threats. In particular, a multi-drug-resistant Salmonella Infantis (S. Infantis) strain that carries a pESI-like mega-plasmid is showing escalating global prevalence. This strain exhibits enhanced fitness, antimicrobial resistance (e.g., resistance to third-generation cephalosporins and quinolones), and other concerning traits (e.g., disinfectant resistance), posing a significant threat to human health. In meat production, the use of disinfectants such as quaternary ammonium compounds or potassium persulfate is an effective measure for preventing microbial contamination, including Salmonella. However, improper use or overuse of disinfectants may lead Salmonella to develop disinfectant resistance, decreasing disinfection efficiency and causing frequent cross-contamination in meat production [ , , , ]. Furthermore, studies have shown that the antibiotic susceptibility of Salmonella isolates can change under certain disinfectant stresses, i.e., disinfectant-resistant bacteria may show co-resistance to antibiotics. In this way, eliminating Salmonella, especially MDR Salmonella, is becoming more and more challenging, posing a great threat to public health. To date, there have been no reports of the emerging S. Infantis strain in China; instead, S. Indiana has been frequently reported. Therefore, to ascertain whether the emerging S. Infantis strain is prevalent in Sichuan Province and to investigate the prevalent Salmonella serovars within the region, in this study we investigated Salmonella contamination in pork and chicken meat sold at a local wet market and at different supermarkets in Chengdu, Sichuan Province. Antibiotic susceptibility tests were performed on the isolated strains, and the minimum inhibitory concentrations (MICs) of disinfectants were determined, with the aim of providing essential data to inform pig and poultry farming and production practices.
2.1. Sample Collection
From March to April 2023, a total of 156 samples, comprising chicken (n = 96) and pork (n = 60), were collected from three supermarkets and a local wet market in Pidu District of Chengdu, Sichuan Province. Each sample was weighed, labeled, placed into a separate sterile bag, and immediately transported to the laboratory at low temperature, where it was processed within 4 h. Pork samples were pre-treated in accordance with the National Food Safety Standard GB 4789.4-2016 . Chicken samples were pre-treated following the methods described by Hou et al. . The details of the samples are shown in .
2.2. Salmonella Isolation and Identification
The isolation and identification of Salmonella were carried out according to GB 4789.4-2016 . Briefly, pre-treated samples were transferred to tetrathionate broth (TTB) and selenite cysteine (SC) broth (Hi-Tech Industrial Park Hope Bio-Technology, Qingdao, China) and incubated at 42 °C and 37 °C, respectively, for 18–24 h. After enrichment, TTB or SC broth was streaked onto xylose lysine desoxycholate (XLD) agar (Hi-Tech Industrial Park Hope Bio-Technology, Qingdao, China) and incubated at 37 °C for 18 h. One or two colonies with a suspicious appearance were picked from the XLD agar for further identification by a duplex PCR assay that simultaneously amplifies two Salmonella-specific genes (invA and hut) in a single PCR reaction. Briefly, synthesized primers amplifying the invA gene (invA-F: 5′-GTGAAATTATCGCCACGTTCGGGCAA-3′, invA-R: 5′-TCATCGCACCGTCAAAGGAACC-3′) and the hut gene (hut-F: 5′-ACTGGCGTTATCCCTTTCTCTGCTG-3′, hut-R: 5′-ATGTTGTCCTGCCCCTGGTAAGAGA-3′) were added to the PCR reaction mixture, and PCR amplification (Biometra, Gottingen, Germany) was performed under the following conditions: initial denaturation at 94 °C for 5 min; 40 cycles of denaturation at 94 °C for 40 s, annealing at 60 °C for 40 s, and extension at 72 °C for 50 s; and a final extension at 72 °C for 5 min . The final PCR products were visualized on a 1.5% agarose gel; two bands of 284 bp and 495 bp are observed for positive Salmonella strains. In addition, the O and H antigens of Salmonella were serotyped by slide agglutination using commercially available antisera following the manufacturer's instructions (Tianrun Bio-Pharmaceutical, Ningbo, China).
2.3. Antimicrobial Susceptibility Test
After identification, the susceptibility of all Salmonella isolates to a panel of 10 antibiotics was determined using the standard Kirby–Bauer disk diffusion method, as recommended by the Clinical and Laboratory Standards Institute (CLSI 2022) . The antibiotics and disk contents were as follows: ampicillin (AMP, 10 µg), cefazolin (CZ, 30 µg), ampicillin/sulbactam (AAM, 10/10 µg), amoxicillin/clavulanic acid (AMC, 20/10 µg), ciprofloxacin (CIP, 5 µg), amikacin (AK, 30 µg), gentamycin (GEN, 10 µg), tobramycin (NN, 10 µg), tetracycline (TET, 30 µg), and trimethoprim/sulfamethoxazole (SXT, 1.25/23.75 µg) (Hangzhou Microbiology Reagent, Hangzhou, China). As required by the CLSI, Escherichia coli ATCC 25922 served as the control strain in the experiment. Results were considered valid only when the diameter of the inhibition zone fell within the acceptable range delineated by the CLSI guidelines.
2.4. Determination of Minimum Inhibitory Concentration (MIC) of Disinfectants
The susceptibility of the Salmonella isolates to three disinfectant agents was assessed using the broth micro-dilution method.
The disinfectants used in this study were benzalkonium chloride (BC), benzalkonium bromide (BAB), and potassium monopersulfate triple salt (PMTS) (Macklin Biochemical Technology, Shanghai, China). Before inoculation, the bacterial suspension was adjusted to the 0.5 McFarland standard with sterile saline and then diluted 100-fold with Mueller–Hinton broth (MHB; Hangzhou Microbial Reagent, Hangzhou, China). The prepared bacterial suspension was inoculated into a 96-well plate (final volume of 200 µL) containing the serially diluted disinfectant solutions and incubated at 37 °C for 24 h. The MIC was determined as the lowest disinfectant concentration at which no visible bacterial growth was detected in the 96-well plate. E. coli ATCC 25922 was used as the quality control (QC) strain for the disinfectant susceptibility test, and the MIC values of the isolated Salmonella strains were compared with those of the QC strain. If the MIC value of an isolate was higher than that of the QC strain, the isolate was classified as disinfectant-resistant .
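The MIC read-out described above reduces to a simple rule over the dilution series. The short sketch below shows that rule and the resistance call against the ATCC 25922 quality-control strain; the concentrations and growth calls are illustrative assumptions, not data from this study.

```python
def mic(concentrations, growth):
    """concentrations: descending two-fold dilution series; growth: parallel
    visible-growth calls. The MIC is the lowest concentration with no growth."""
    no_growth = [c for c, g in zip(concentrations, growth) if not g]
    return min(no_growth) if no_growth else None  # None: MIC above tested range

series = [256, 128, 64, 32, 16, 8, 4, 2]          # hypothetical mg/L series
isolate = mic(series, [False, False, False, True, True, True, True, True])   # 64
qc      = mic(series, [False, False, False, False, True, True, True, True])  # 32
print(isolate, qc, "resistant" if isolate > qc else "susceptible")
```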
3.1. Prevalence and Identification of Salmonella Isolates
In this study, a total of 156 samples were subjected to isolation and identification, yielding 91 positive results. Among the positive samples, 60 were from chicken and 31 were from pork, giving an overall contamination rate of 58.33% (91/156), with rates of 62.50% (60/96) for chicken-derived Salmonella and 51.67% (31/60) for pork-derived Salmonella. Among the four sampling sites, supermarket A and supermarket C had the highest contamination rate of 61.54% (24/39 each). In contrast, the contamination rates of the local wet market and supermarket B were lower, at 58.97% (23/39) and 51.28% (20/39), respectively. From these 91 positive samples, 190 isolates were confirmed as Salmonella by double PCR. For further identification, serotyping was performed, and the results are shown in . Overall, 151 Salmonella isolates were assigned to 6 Salmonella serogroups representing 9 different Salmonella serovars, and the remaining 39 strains could not be typed. The most prominent Salmonella serogroups were E1 (102, 67.55%), B (23, 15.23%), and D1 (16, 10.60%). The main serovars were S. London (89, 58.94%), S. Typhimurium (19, 12.58%), and S. Enteritidis (16, 10.60%), while S. Infantis was not detected in this study.
3.2. Antibiotic Resistance in Salmonella Isolates
Antibiotic susceptibility was assessed after identifying the Salmonella isolates ( ). Of the 190 Salmonella strains, 168 (88.42%) were resistant to at least one antibiotic, and 150 (78.95%) were resistant to three or more antibiotics. As shown in a, among the 168 resistant strains, 4 were resistant to only one antibiotic (2.11%); 14 to two antibiotics (7.37%); 51 to three antibiotics (26.84%); 48 to four antibiotics (25.26%); 28 to five antibiotics (14.74%); 15 to six antibiotics (7.89%); 4 to seven antibiotics (2.11%); 3 to eight antibiotics (1.58%); and 1 to all ten antibiotics (0.53%). In terms of specific antibiotics, a higher prevalence of resistance was observed for AMP (83.16%), TET (76.31%), SXT (67.37%), and AMC (60.00%). In comparison, resistance to GEN (24.71%), CZ (20.52%), and NN (13.16%) was observed less frequently. Most strains were sensitive to AAM (3.68% resistant), CIP (2.63% resistant), and AK (0.53% resistant). Notably, only five isolates in this study were found to be resistant to ciprofloxacin (CIP); three of these were identified as S. Indiana and one as S. Kentucky. The antimicrobial susceptibility results at the level of the primary serovars are shown in . All strains of S. Muenster exhibited resistance to SXT, TET, CZ, GEN, and AMP. Additionally, 94.73% of S. Typhimurium isolates displayed resistance to TET, AMP, and AMC. Among the five primary serotypes, S. Typhimurium was the most resistant serovar, with three strains exhibiting resistance to eight different antibiotics, followed by S. Muenster, which had one strain resistant to seven antibiotics. Moreover, the antimicrobial resistance profiles of the Salmonella isolates are displayed in . In total, we observed 33 profiles with different combinations of antibiotics, dominated by SXT-TET-AMP-AMC (34/168), followed by SXT-TET-AMP (25/168), TET-AMP-AMC (14/168), and SXT-TET-GEN-AMP-AMC (14/168).
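The prevalence figures above are simple proportions; the short script below recomputes the headline rates from the raw counts reported in this section, a useful sanity check when transcribing results (all counts are taken from the text above):

```python
# Recompute the headline prevalence figures from the reported raw counts.
counts = {
    "overall": (91, 156),   # positive samples / total samples
    "chicken": (60, 96),
    "pork":    (31, 60),
    "MDR":     (150, 190),  # isolates resistant to >= 3 antibiotics
}
for label, (pos, total) in counts.items():
    print(f"{label}: {pos}/{total} = {100 * pos / total:.2f}%")
# overall: 91/156 = 58.33%   chicken: 60/96 = 62.50%
# pork:    31/60 = 51.67%    MDR:     150/190 = 78.95%
```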
3.3. Disinfectant Resistance of Salmonella Isolates
In addition to antibiotic resistance, the resistance of the Salmonella isolates to disinfectants was also investigated in this study ( ). As the QC strain, the MIC values of E. coli ATCC 25922 for BC, BAB, and PMTS were determined to be 16 mg/L, 16 mg/L, and 2000 mg/L, respectively. The MIC values of the 190 isolates varied, ranging from 16 to 64 mg/L for BC and BAB, and from 2000 to 4000 mg/L for PMTS. The proportions of MIC values of the 190 isolates for the three tested disinfectants are listed in . For BAB, the MIC of more than half of the isolates (61.05%, 116/190) was 64 mg/L. For PMTS, a MIC value of 2000 mg/L was obtained in 66.32% (126/190) of the isolates; in addition, the MIC 50 was 2000 mg/L and the MIC 90 was 4000 mg/L. Regarding BC, the MIC of 47.89% (91/190) of the isolates was 32 mg/L and that of 49.47% (94/190) was 64 mg/L; the MIC 50 and MIC 90 values were 32 mg/L and 64 mg/L, respectively. Furthermore, the MIC 50 and MIC 90 values for the three disinfectants were generally consistent across isolates from different regions and types, with the exception of some differences in the MIC 50 for BC. Specifically, the MIC 50 for chicken-derived isolates from supermarkets A, C, and D was 64 mg/L for BC, while the MIC 50 for pork-derived isolates was 32 mg/L. Conversely, for supermarket B, the MIC 50 for pork-derived isolates was 64 mg/L and that for chicken-derived isolates was 32 mg/L. Detailed information on the MIC values of Salmonella isolates from different regions and types is given in the . Based on the MIC values obtained above, Salmonella isolates were considered “resistant to certain disinfectants” if they exhibited higher MIC values than the QC strain. Accordingly, 100% of the isolates in this study were found to be resistant to BAB, 97.37% were resistant to BC, and 33.68% were resistant to PMTS. In summary, the isolates in this study exhibited higher resistance rates to BC and BAB, and a comparatively lower resistance rate to PMTS.
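MIC50 and MIC90 are the lowest concentrations inhibiting at least 50% and 90% of the isolates, read from the cumulative MIC distribution. A minimal sketch of that calculation using the BC distribution reported above; the five isolates not at 32 or 64 mg/L are assumed here to sit at 16 mg/L, which the stated 16–64 mg/L range implies but the text does not state explicitly:

```python
# MIC50/MIC90: lowest concentration inhibiting >= 50% / >= 90% of isolates.
def mic_percentile(dist: dict[float, int], pct: float) -> float:
    """dist maps MIC (mg/L) -> number of isolates; pct in (0, 100]."""
    total = sum(dist.values())
    cumulative = 0
    for mic in sorted(dist):
        cumulative += dist[mic]
        if 100 * cumulative / total >= pct:
            return mic
    raise ValueError("percentile not reached")

# BC distribution from this study; the 5 isolates at 16 mg/L are inferred.
bc = {16: 5, 32: 91, 64: 94}
print(mic_percentile(bc, 50), mic_percentile(bc, 90))  # 32 64
```

Running this reproduces the reported BC MIC50 of 32 mg/L and MIC90 of 64 mg/L.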
In 2023, the incidence of Salmonella infections in China was estimated at 1295.59 per 100,000 population (95% uncertainty interval: 1002.62–1573.11), much higher than that in Europe in 2022 (15.3 per 100,000) and the United States in 2023 (13.9 per 100,000) [ , , ]. Such high contamination rates might result from the lack of specific standards on Salmonella spp. in fresh or frozen livestock/poultry products or during slaughtering and processing in China . On the other hand, local wet markets are more common in China, and these might be an important source of Salmonella contamination. In the present study, we collected 156 meat samples of pork and chicken origin from supermarkets and a local wet market, aiming first to investigate the prevalence of Salmonella. Overall, 91 out of 156 (58.33%) samples were Salmonella-positive. Notably, lower overall contamination rates were found in the studies of Tang et al. (7.95%) and Aladi et al. (31.5%) . Across countries and farms, variations in farming practices significantly influence Salmonella prevalence, and inadequate coverage of cold-chain logistics for agricultural products poses an additional challenge . The centralization of meat supply in distinct production areas also plays a key role in the high prevalence of Salmonella . Insufficient market regulation, possibly due to a suboptimal local retail environment, further exacerbates the issue . In addition, the relatively lower prevalence of contamination in the local wet market compared with two of the three supermarkets could be attributed to the fact that local wet markets typically sell fresh meat. This meat is often processed on-site, with slaughtering, de-hairing, and evisceration completed within 20 min, so the likelihood of cross-contamination is considerably reduced . Regarding the contamination rates of the different meat samples, a significantly higher contamination rate was found in chicken (62.50%, 60/96) than in pork (51.67%, 31/60) in our study. This is consistent with some previous reports: 42.3% for chicken versus 8.9% for pork , and 22.2% for chicken versus 6.7% for pork . The prevalence of Salmonella is closely linked to environmental hygiene; the high rate of chicken contamination observed in this study may be attributable to inadequate disinfection in the abattoir or substandard hygiene practices. Beyond prevalence, identifying the serovars of the Salmonella isolates is an important step in recognizing potentially risky strains. Different Salmonella serotypes exhibit specific host preferences and distinct geographical distribution patterns, and serotyping enables tracing of Salmonella infection sources while providing insights into transmission pathways and epidemiological trends. Among the 190 Salmonella isolates from 91 samples, nine serovars were identified; the dominant serotype was S. London (46.84%, 89/190), followed by S. Typhimurium (10.00%, 19/190) and S. Enteritidis (8.42%, 16/190). Li et al. analyzed sporadic diarrhea cases in China from 2014 to 2021 and found that S. London was one of the top five serotypes causing diarrhea cases. S. London was also one of the most common serotypes with an ACSuT profile (resistance to ampicillin, chloramphenicol, sulfonamide, and tetracycline), which is similar to our findings. The identification of the serovars S. Typhimurium and S.
Enteritidis in this study reveals a potential food safety issue, since these two serovars have been the primary Salmonella serovars involved in human infections globally over the years. Notably, the prevalence of S. Typhimurium (10.00%) and S. Enteritidis (8.42%) was higher than that found in Europe (4.00% and 7.90%) and America (2.00% and 2.00%) . In the EU and the US, national control programs and specific standards restricting these two Salmonella serovars in meat production have decreased their prevalence; in China, no such measures have yet been implemented, underscoring their necessity in the future. Antimicrobial resistance of pathogenic microorganisms is recognized as one of the most significant challenges to global public health. Numerous studies have focused on the antimicrobial resistance profiles of Salmonella, since its antimicrobial resistance can be transferred to the general population through Salmonella-contaminated meat. In the current study, all isolated Salmonella strains were tested for antimicrobial susceptibility. The results showed a high prevalence of antimicrobial resistance among the Salmonella isolates: 78.95% of isolates were multi-drug-resistant, and one Salmonella isolate was resistant to all 10 tested antibiotics ( a). These results indicate that the antimicrobial resistance situation of Salmonella is far from optimistic. Specifically, a high prevalence of resistance to ampicillin (83.16%), tetracycline (76.31%), and trimethoprim/sulfamethoxazole (67.37%) was identified in this study. Yang et al. found that Salmonella isolates from poultry were resistant to ampicillin (55.3%), tetracycline (47.8%), and trimethoprim/sulfamethoxazole (31.1%). This suggests that these antibiotics are commonly applied in the animal breeding industry and may be widely driving Salmonella resistance. On the other hand, Salmonella isolates from human diarrhea samples were resistant to ampicillin (73.4%), tetracycline (64.1%), and trimethoprim/sulfamethoxazole (34.9%), with resistance rates increasing year by year from 2014 to 2021 . Such a similar antimicrobial resistance pattern between Salmonella isolated from meat and from humans indicates the possibility of resistance transferring from meat to humans through the food chain. A low incidence of resistance to ampicillin/sulbactam (3.68%), ciprofloxacin (2.63%), and amikacin (0.53%) was found, consistent with previous research findings . The low resistance to these antibiotics may be attributed to the fact that they are currently less utilized in animal husbandry or have only recently become widely available. Specifically, ciprofloxacin, a third-generation quinolone antimicrobial agent, is a primary treatment for Salmonella infections . Fortunately, only five strains (2.63%) exhibited resistance to ciprofloxacin in the present study, three of which were identified as S. Indiana. Resistance to this class of antibiotics among S. Indiana has been observed to be on the rise, with widespread resistant strains emerging . Studies on disinfectant resistance can help optimize disinfection strategies and enhance understanding of the impact of disinfectants on antibiotic resistance, thereby providing strategies to mitigate antibiotic resistance. In this study, the Salmonella isolates demonstrated resistance to the three disinfectants tested, and similar results have been reported previously .
Isolates obtained from the local wet market exhibited a lower rate of resistance to PMTS, yet displayed rates of resistance to BC and BAB similar to those of isolates from the three supermarkets. This may be attributed to the fact that meat in supermarkets comes primarily from centralized abattoirs, where disinfectant exposure may select for resistance to disinfectants. Resistance to both BC and BAB may reflect the widespread use of quaternary ammonium compound disinfectants in recent years, which has facilitated the spread of certain resistance genes; this phenomenon warrants further investigation. Regarding the different meat sources, isolates from pork exhibited higher resistance to PMTS than isolates from chicken, which may be a consequence of the more extensive use of PMTS in pig slaughterhouses.
In summary, S. London emerged as the predominant serovar among the Salmonella isolates obtained in this study, while the multi-drug-resistant S. Infantis was not detected. This highlights that different serovars exhibit varying prevalence rates across Europe, the USA, and Asia. Additionally, the high levels of resistance to antibiotics and disinfectants observed in this study pose a potential risk to public health.
Public awareness of mental illness: Mental health literacy or concept creep? | 4a6aa470-1a6b-4323-b7a9-598dad873ab8 | 11804130 | Health Literacy[mh] | The distinction between MHL and concept breadth can be understood through the lens of signal detection theory. When judging whether a signal is present, four possibilities exist: A “hit” when the signal is correctly judged to be present, a “correct rejection” when it is correctly judged to be absent, a “false alarm” when it is incorrectly judged present, and a “miss” when it is incorrectly judged absent. Signal detection analysis models these judgments using two parameters. Sensitivity is the level of accuracy (i.e., the degree to which hits and correct rejections exceed misses and false alarms) and bias is the general tendency to judge the signal to be present (or absent). A judge with a liberal bias has a low threshold for detecting the signal and will therefore score many hits but also make many false alarms. A judge with a conservative bias will make fewer false alarms but also score fewer hits. MHL is akin to sensitivity, capturing how accurately a person identifies mental illness. Concept breadth, in contrast, is akin to bias. People with expansive concepts frequently judge mental illness to be present, leading to many hits but also many false alarms (i.e., over-diagnoses), whereas those with narrow concepts rarely judge it present, leading to many correct rejections but also many missed diagnoses. Sensitivity and bias are independent parameters, so every combination of high and low levels of MHL and concept breadth can occur. Several implications follow from this analysis. First, holding a broad concept of mental illness is no guarantee of accuracy. In earlier decades, when laypeople’s concepts of mental illness were overly narrow, people with broader concepts may have had greater MHL, but in the present time broad concepts can be inaccurate. Second, any factor that broadens concepts of mental illness will increase diagnostic false positives (i.e., reduce “specificity”), unless counteracted by a rise in MHL. Third, although efforts to boost MHL by raising mental health awareness may enhance accurate knowledge, they may also broaden concepts of mental illness. Increasing awareness of diagnostic concepts, for example, may induce bias rather than accuracy if people adopt low thresholds for applying them. It is not surprising that members of the public might understand mental illness in ways that are both accurate and overly expansive. Psychopathology typically falls on a spectrum and diagnosis usually relies on a set of imprecise severity or frequency criteria and subjective judgments of clinically significant distress or impairment. Laypeople may have accurate knowledge of the clinical features of a mental illness, and consequently a good capacity to recognize prototypical cases, but also have a poorly calibrated sense of the severity threshold required for a diagnosis. In this way, high MHL can co-exist with expansive concepts of mental illness. Recent research supports our contention that holding broad concepts of mental illness should be distinguished from MHL and that they have important implications. Supporting the distinctness of the factors, new psychometric scales for measuring concept breadth show that it is barely correlated with MHL, consistent with them being akin to independent accuracy and bias parameters. 
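To make the hit/false-alarm framing introduced above concrete, the standard equal-variance Gaussian signal detection model expresses the two parameters in closed form. Writing H for the hit rate and F for the false-alarm rate, sensitivity (d′) and the response criterion (c) are:

```latex
\[
d' = z(H) - z(F), \qquad c = -\tfrac{1}{2}\bigl[\,z(H) + z(F)\,\bigr],
\]
```

where z(·) denotes the inverse of the standard normal cumulative distribution function. For example, a judge with H = 0.90 and F = 0.40 has z(H) = 1.2816 and z(F) = −0.2533, giving d′ = 1.53 and c = −0.51: good discrimination combined with a liberal bias (c < 0), that is, many hits alongside many false alarms, the profile the passage attributes to high MHL paired with a broad concept of mental illness.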
Supporting the role of broad concepts in overdiagnosis, research using these scales shows that people holding broad concepts are more likely to self-diagnose in the absence of a professional diagnosis than people with narrower concepts who are experiencing the same levels of distress. Groups who are especially likely to self-diagnose—young adults and political progressives—were also most likely to hold broad concepts. Holding broad concepts of mental illness has some positive implications, such as greater willingness to seek help and lesser stigma, but studies such as these suggest it also carries risks. Four overlapping risks stand out. First, because broad concepts of mental illness foster self-diagnosis, they are partly responsible for its problematic consequences. One study found that young adults who self-labelled with depression believed they had less control over their illness and coped less effectively with it than equally depressed people who did not self-diagnose, implying that broad concepts of mental illness that encourage self-labelling may undermine recovery. Second, it has recently been argued that self-diagnosis can contribute to the development of mental illness via self-fulfilling processes. For example, identifying as clinically anxious can lead people to avoid threatening situations, which can deepen and entrench their anxiety. This finding aligns with evidence that inducing people to hold a broad concept of trauma leads them to develop more posttraumatic symptoms following an unpleasant experience. Third, unwarranted self-diagnosis can set in motion unwarranted formal diagnosis. Patients may seek ratification of their self-diagnosis from a mental health professional, who may oblige out of a well-intentioned desire to secure treatment or accommodations for the patient. Fourth, broad concepts of mental illness are especially likely to bring people with relatively mild problems into treatment, and such people may be least likely to benefit from it. Indeed, recent research found that patients experiencing relatively mild anxiety or depression symptoms who accessed mental health services through Australia’s Better Access initiative were considerably more likely to get worse than to improve. Broad concepts of mental illness may therefore expose people to adverse iatrogenic effects. Increased public awareness of mental illness is a goal few would oppose. However, we should be concerned if, in addition to disseminating accurate knowledge, it is fostering expansive concepts of illness that lead people to pathologize everyday, subclinical distress. Rising awareness can be a double-edged sword, improving our MHL but also making our mental health worse by promoting identification with self-limiting and potentially self-fulfilling diagnoses. Broad concepts of mental illness may be contributing to a trend of overdiagnosis that has potentially dire implications. It is vital that psychiatrists and other mental health professionals find ways to boost accurate knowledge about mental illness without simultaneously inflating it, and that they push back against overly expansive concepts. |
Consensus Statement on the Outcome of the European Herbal Health Products Summit – Which Way Forward? | 2af6c7d2-565e-4430-b56d-d05ec95155ca | 11557087 | Pharmacology[mh] | Herbal medicinal products play a crucial role in self-medication and are, therefore, an important part of our healthcare systems, as the pandemic has impressively shown. Authorisation or registration and the resulting medicinal product status of herbal preparations ensure the high pharmaceutical quality of the product and thus its safety. This is carefully and regularly monitored by national competent authorities and is the responsibility of pharmaceutical companies as Marketing Authorisation Holders (MAHs). Pan-European market data from IQVIA were presented and show increasing consumer interest in herbal (medicinal) products linked to increased and changing healthcare needs, sustainability concerns, and demand for products with proven benefits. Herbal products are an important product group in the self-medication market and include registered/authorised herbal medicinal products as well as food supplements containing herbal materials, commonly called ‘botanicalsʼ. As shown in IQVIAʼs presentation, one in four packages of OTC products (over-the-counter products, for self-medication) sold in Europe was a herbal product, with cough and cold being the main category of use. (Traditional) herbal medicinal products ((T)HMP) and botanicals are competitors in an evolving market. According to IQVIAʼs data, a significant increase in sales value and volume can be observed in the EUʼs herbal market, especially in the area of botanicals, with numerous new products entering the market. In contrast, new herbal medicinal products have rarely been developed and cannot withstand the pressure of botanicals in terms of innovation rate and time to market, resulting in a declining number of marketing authorisations/registrations of herbal medicinal products across Europe. Compared with botanicals/supplements, herbal medicinal products fall within the scope of the general pharmaceutical legislation. Herbal medicinal products must meet all legal/regulatory requirements, including quality, safety, and efficacy. These are reviewed and approved by the national competent authority before a marketing authorisation is obtained. Herbal medicinal products have a specific indication that describes the treatment or prevention of a disease through a pharmacological, metabolic, or immunological action. Adverse reactions occurring during therapy with herbal medicinal products are reported to and monitored by the European pharmacovigilance system. On the other hand, botanicals have a nutritional and/or physiological effect and must comply with food law. This requires compliance with specified quality parameters and a manufacturing process set by the food company, as well as appropriate labelling in terms of claims and warnings under the responsibility of the food company, considering food legislation and general information on a productʼs composition. Botanicals on the market are randomly checked by the supervisory authorities and generally require neither assessment and approval by the relevant national competent authorities nor monitoring for adverse reactions, as is the case with herbal medicinal products.
There was agreement that awareness of the importance of education on healthy and adequate nutrition, including a focus on product composition covering herbal medicinal products and food supplements, should be integrated into school curricula. In 2012, the EU Commission established an ‘on-holdʼ list of 2078 health claims for botanicals related to herbal substances in food supplements, mainly due to the lack of human intervention studies. ‘On-holdʼ health claims for botanicals, whether negatively assessed or not yet reviewed, are still used on the EU market in accordance with the transitional measures set out in the Nutrition and Health Claims Regulation (NHCR) until a decision is made on the ‘on-holdʼ list. In many cases, ‘on-holdʼ claims refer to the use of the herbal active substance as a medicinal plant and provide a claim describing the prevention of a disease for which the medicinal plant is used. This practice lacks a sound, science-based process. To date, 530 claims have been assessed negatively, meaning they provide consumers with false and misleading information, and 1548 botanical-related claims remain on the on-hold list awaiting the Commissionʼs and Member Statesʼ final consideration. Health claims influence consumersʼ choices, along with other characteristics such as price or brand. Therefore, there is an urgent need for health claims to be assessed by a competent authority, providing a scientific basis consumers can rely on. Additionally, panellists agreed that ensuring appropriate safety labelling is crucial to protect consumer health. In addition, it is very important to ensure fair competition between herbal and chemically defined medicinal products. Comparing the requirements for chemically defined medicinal products and herbal medicinal products in the case of changes in manufacturing and sourcing, the requirements for herbal medicinal products are higher. There is no scientific basis for this, as the manufacturing and control processes for herbal medicinal products are robust and validated. To restore equal treatment for both groups, the revision of the Variation Classification Guideline should be taken as an opportunity to simplify the change processes for herbal medicinal products. For example, if a supplier of herbal raw materials changes, a simplified procedure in the Variation Classification Guideline is essential, as such changes are required more frequently due to the effects of climate change. The focus should be on minimising bureaucratic burden to ensure security of supply. The revision of the general pharmaceutical legislation and of the Variation Classification Guideline should be seen as an opportunity to secure the future of herbal medicinal products. In general, (traditional) herbal medicinal products are already well regulated by law. Efforts are being made to strengthen supply chains, manage shortages, and adapt to changes. For market access, it is essential to maintain well-established use as a legal basis and application type for herbal medicinal products. Well-established use is successfully applied as the legal basis for the EU monographs established by the Committee on Herbal Medicinal Products (HMPC), via a harmonised review and assessment process across the 27 Member States. The HMPC monographs therefore form the harmonised basis for the marketing authorisations of herbal medicinal products in Europe.
With respect to the revision of the general pharmaceutical legislation, panellists recommend focussing not only on harmonisation, which carries the risk of settling on the lowest common denominator in indications and patient groups. The numerous opportunities, such as new strategies to use real-world data to enable safe access for vulnerable populations, including children, and to promote innovation, need to be explored. Better regulatory support for herbal medicinal products containing combinations of herbal drugs and their preparations was also highlighted. Botanicals should also be regulated in a more harmonised way in the EU, for example in terms of labelling and nutrivigilance. This will enhance product transparency and protect consumer safety. In the interest of transparency for informed patients and consumers, it should be immediately clear whether a product is a medicinal product or a food supplement/botanical, requiring clear labelling on the front of the package. Due to the different regulations governing medicinal products and food supplements/botanicals, understanding of the potential health risks associated with botanicals is limited. To ensure and protect consumer safety in Europe, stricter compliance with harmonised quality standards, an assessment of pending health claims on botanicals, and appropriate safety information in the product information of botanicals are required. A harmonised legal food system would be an important step in the right direction. There was consensus that the market for herbal medicinal products is largely harmonised through the successful work of the HMPC, which guarantees that effective and safe herbal medicinal products of a high level of quality are available according to the same standards across Europe. The HMPC monographs provide a solid basis for evaluation and enable market access for herbal medicinal products. Since the revision of the general pharmaceutical legislation foresees a change in the structure of the European Medicines Agency (EMA), it is vital that the quality of work and the resources of the HMPC remain at the same standards. In addition, the value of herbal medicinal products needs to be given greater consideration in both the education and training of staff of the national competent authorities, healthcare professionals, and the general public. Herbal medicinal productsʼ value in terms of assessed and approved quality, efficacy, and safety, as well as their potential, needs to be recognised and clearly visible to the patient. Increased support for research and development on all aspects of herbal medicinal products and botanicals to foster innovation is essential. Strategically, this contributes to overcoming the societal challenges of growing demand and the financial burden on the healthcare system. Here, herbal medicinal products, like all other self-medications, can play an important role in saving millions in the healthcare system.
Prof. Dr. Susanne Alban, Director, Institute of Pharmacy, University of Kiel
Christelle Anquez-Traxler, Regulatory and Scientific Affairs Manager, Association of the European Self-Care Industry (AESGP)
Prof. Dr. Ioanna Chinou, Head of Lab of Pharmacognosy and Chemistry of Natural Products, National and Kapodistrian University of Athens
Dr. Hubertus Cranz, Director General, German Medicines Manufacturersʼ Association
Dr. Emiel van Galen, Chair, Committee on Herbal Medicinal Products (HMPC), European Medicines Agency
Andreas Glück, Member of the European Parliament
Dr. Jens Gobrecht, Director of the European Representation and International Affairs, Federal Union of German Associations of Pharmacists e. V.
Thomas Heil, Vice-President, IQVIA Consumer Health
Prof. Dr. Michael Heinrich, President GA/UCL School of Pharmacy (UK)
Dr. Peter Liese, Member of the European Parliament
Angela Müller, Head of Global Regulatory Affairs, Dr. Willmar Schwabe GmbH & Co.KG
Dr. Bernd Roether, Head Division Drug Regulatory Affairs, Bionorica SE
Julia Rumsch, Head of Brussels Office, German Pharmaceutical Industry Association (BPI)
Prof. Dr. Barbara Sickmüller, President, German Society for Regulatory Affairs (DGRA)
Dr. Nico Symma, Manager Herbal and Complementary Medicinal Products, German Medicines Manufacturersʼ Association
Dr. Jacqueline Wiesner, Head of the Department of Herbal and Traditional Medicines, Federal Institute for Drugs and Medical Devices (BfArM)
Drafting the manuscript: M. Heinrich, B. Reiken, B. Roether, A. Müller, J. Rumsch, N. Symma; critical revision of the manuscript: M. Heinrich, B. Reiken, B. Roether, A. Müller, J. Rumsch, N. Symma
Common adolescent mental health disorders seen in Family Medicine Clinics in Ghana and Nigeria | c303767d-ff24-4534-96a3-869a93c2ca36 | 10653403 | Family Medicine[mh] | The burden of mental health disorders is on the rise globally. In 2019, one in every eight people, or 970 million people around the world, were living with a mental disorder, of which anxiety and depressive disorders were the most common . In 2020, the number of people living with anxiety and depressive disorders rose considerably because of the COVID-19 pandemic . Adolescents are persons aged 10–19 years. Adolescence is a critically important stage of life for the mental health and well-being of individuals, not only because this is when young people acquire autonomy, social interaction, self-control, and rapid learning, but also because the abilities and potentials formed in this period have a direct bearing on their mental health for the rest of their lives . Although adolescents are generally highly susceptible to mental health challenges, they receive very little attention, especially in developing countries . Globally, one in seven adolescents experiences a mental disorder, accounting for 13% of the global burden of disease in this age group. Depression, anxiety, and behavioural disorders are among the leading causes of impairment and disability among them . According to the World Health Organization, common mental disorders such as depression and anxiety account for the largest proportion of mental, developmental, and substance use disorders. Behavioural disorders, including attention-deficit and hyperactivity disorder plus conduct disorder, are more prevalent among 10–14-year-olds, while alcohol and drug use disorders are more common in older adolescence (15–19-year-olds) . One out of every six young Nigerians aged 15–24 is suffering from poor mental health, according to a report released by the United Nations Children's Fund (UNICEF) . In Ghana, the WHO estimated that 650,000 people are suffering from a severe mental disorder and a further 2,166,000 from a moderate to mild mental disorder, with a treatment gap of 98% . However, the prevalence of mental illness and its burden among adolescents is not known at the national level . There is a dearth of mental health experts in West Africa, worsened by the stigma associated with mental health disorders in the region. The World Health Organization (WHO) and the World Organization of Family Doctors (WONCA) advocate the integration of mental health services into primary care as the most viable way of closing the treatment gap and ensuring people get the mental health care they need . There is a need to ascertain the degree of integration of mental health into primary care in Nigeria and Ghana. This can be assessed by evaluating how commonly primary care physicians in these two countries see adolescents with mental health disorders, which will help policy makers identify the role of Family Physicians in the management of adolescent mental health disorders. It will also help identify the common mental health disorders presenting to Family Medicine clinics. This information could be used to increase the capacity and training of Family Physicians in the management of adolescent mental health disorders and will, in the long run, reduce the burden of the disease, which is currently high.
The aim of this study was to identify the common mental health disorders among adolescents seen by Family Physicians in Family Medicine Clinics in Nigeria and Ghana.
Study design and setting
The study was a descriptive cross-sectional study conducted among 302 Family Physicians practising in Nigeria and Ghana. It is part of a larger study, some findings of which have already been published . The sample size was calculated to be 302 using Fisher's formula, and a finite population correction was applied based on the total population of Family Physicians in each country: 1200 in Nigeria and 125 in Ghana. The sample size (302) was distributed proportionately to each country based on its population size at the time of data collection: Nigeria had 254 while Ghana had 48 Family Physicians. The study sites included General Outpatient Clinics of Teaching Hospitals; Specialist, General, and District Hospitals; and other Primary Healthcare Centres where Family Physicians practise in both countries. Physicians were recruited into the study using a multi-stage sampling method. Purposive sampling was used to select the two countries, Nigeria and Ghana, for the study. Family Medicine clinics in both countries were identified through the Family Medicine accreditation lists of the West African College of Physicians and the postgraduate medical colleges of Nigeria and Ghana, and through the Society of Family Physicians of Nigeria and Ghana's database. Simple random sampling was then used to select Family Medicine clinics across both countries from this database, and all Family Physicians in the selected clinics who met the selection criteria and gave written informed consent were recruited for the study. The questionnaire was in two parts: 1) sociodemographic variables of the physicians themselves and 2) data obtained from the Family Physicians' clinics over six weeks. The physicians reported the number of mental health conditions seen in adolescents over the past six weeks from their clinical records. The medical records of the Family Medicine clinics were reviewed by the physicians to confirm the number and distribution of adolescent mental health patients they had attended to, and they reported accordingly. Other details of the study design and methodology are contained in the published article .
Statistical analysis
Data were analysed using the Statistical Package for Social Sciences™ (IBM Corp, Armonk, NY, USA) version 22.0. They were presented in tables and described using frequencies and percentages.
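The sample-size calculation described under "Study design and setting" above follows the usual two-step pattern: an initial estimate from Fisher's (Cochran's) formula, followed by a finite population correction for each country. A minimal sketch is below; the z-value, expected proportion p, and margin of error d are illustrative placeholders, since the exact inputs the authors used are not reported in this excerpt, and these placeholders do not reproduce the study's final allocation of 254/48.

```python
import math

def fisher_sample_size(z: float, p: float, d: float) -> float:
    """Initial sample size n0 = z^2 * p * (1 - p) / d^2."""
    return (z ** 2) * p * (1 - p) / (d ** 2)

def finite_correction(n0: float, population: int) -> int:
    """Adjust n0 for a finite population N: n = n0 / (1 + (n0 - 1) / N)."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Placeholder inputs: 95% confidence (z = 1.96), p = 0.5, d = 0.05.
n0 = fisher_sample_size(1.96, 0.5, 0.05)   # ~384.2
print(finite_correction(n0, 1200))          # Nigeria (N = 1200) -> 292
print(finite_correction(n0, 125))           # Ghana   (N = 125)  -> 95
```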
A total of 233 Family Physicians completed the study (a 77.2% response rate; the country response rates for Nigeria and Ghana were 66.1% and 135% respectively, as more Ghanaian physicians responded than were initially allocated). Of these, 65 (27.9%) practised in Ghana and 168 (72.1%) in Nigeria. They worked in facilities that were mainly in urban settings (180, 77.25%). The majority of the facilities were tertiary institutions (152, 65.24%), either Teaching Hospitals or Federal Medical Centres. The socio-demographic characteristics are shown in . shows the adolescent mental health disorders seen in Family Medicine Clinics in Ghana and Nigeria. Over 90% of Family Medicine practitioners attend to adolescents with mental health issues, and over 70% of them see at least 2 adolescents with mental health issues every year. The burden of adolescent mental health disorders seen by Family Physicians (> 3 patients a year) in this study was 16%. The distribution of the common adolescent mental health disorders seen in Family Medicine Clinics in Ghana and Nigeria is shown in . Depression (138, 59.23%) was the most commonly seen disorder, followed by Bipolar Disorders (130, 55.79%) and Substance Use Disorders (103, 44.21%).
The study participants and their distribution have already been described in another publication from this study . Our study showed that 91% of respondents attend to adolescents with mental health issues, with over half of them attending to about two to three adolescents with mental health disorders yearly. This is worrisome: it shows that the burden of adolescent mental health in primary care is enormous. If not attended to, these problems often progress to major mental disorders, including suicide and self-harm, which have been on the rise among adolescents . This calls for improved capacity in the diagnosis and management of mental health disorders among Family Physicians, particularly as patients rarely present for care because they are often stigmatized, shunned, and denied access to care by their families, caregivers, and society . The burden of mental health disorders among adolescents in primary care in West Africa found in this study (16%) is similar to the global figure of a 14% burden of mental health disorders among adolescents and the 16% burden found by Robert et al in England . The high burden found in this study can be attributed to the large number of adolescents presenting to primary care centres as compared with specialist clinics, the fear of stigmatization associated with mental health disorders, and the recent development of subspecialty clinics, such as adolescent and geriatric clinics, within general outpatient clinics . Respondents reported a variety of mental health disorders. The most common disorder was depression, followed by bipolar disorder, epilepsy, and substance use disorders, with the least common being enuresis, Attention Deficit Hyperactivity Disorder (ADHD), psychosis, and schizoaffective disorders. The World Health Organization (WHO) reports that 12 billion working days and 1 trillion US dollars are lost annually to depression and anxiety alone . On a global scale, WHO stated in 2021 that “Depression, anxiety and behavioural disorders are among the leading causes of illness and disability among adolescents” , which is in tandem with this study, in which 59.23% of the respondents treated depression. However, a study done in Enugu, Nigeria found schizophrenia to be the commonest mental health disorder in Nigeria. That study was not done in a primary care setting, and providers and stakeholders had limited or no training in adolescent mental health, which could explain the slight difference. The high number of depression cases among adolescents can be explained by the high level of poverty and the small number of specialists available to attend to the huge burden of mental health disorders among adolescents . Bipolar disorder was the second leading mental health disorder identified in this study. This could be because the most frequent age of onset of bipolar disorder is between 14 and 21 years, which falls within the adolescent and early adult age group . Substance use disorder and suicide or self-harm were also prevalent among adolescents in this study. This is similar to the findings of Birhanu et al in Ethiopia , Mavura et al in northern Tanzania , and Volkow et al in the US . The reasons may not be unconnected to the high level of peer influence, risk-taking behaviour, and experimentation with substances due to the developmental changes and challenges of adolescence . The high burden of self-harm or suicide in this study could be due to the strong relationship between substance abuse and suicide or self-harm, especially among adolescents and young adults .
There is an urgent need for Family Physicians to look out for adolescent mental health issues and address them at an early stage, before they progress to more complicated forms. There is also a need for policy makers to increase awareness of the burden of mental health disorders among adolescents and put measures in place to mitigate them.
The prevalence of mental disorders among adolescents seen by Family Physicians in Family Medicine clinics is high in Nigeria and Ghana. Family Physicians need specialized training and retraining on mental health issues concerning adolescents. More subspecialty adolescent clinics are also needed within Family Medicine clinics to handle adolescents' challenges, including recognising and treating adolescent mental health disorders early. A high index of suspicion for these disorders is needed when adolescents present. Future studies should seek to establish the relationship between specific variables and types of mental health conditions, as well as the background knowledge of health workers in recognising these conditions. Policy makers should also put measures in place to improve awareness and care for patients.
The study was conducted in two countries in West Africa. Although most Family Medicine clinics in the region are in these two countries, the results still may not be truly representative of the entire region. In addition, the study was conducted among doctors, whereas most primary health care centres in the region are run by primary care nurses, community health officers, and community health extension workers. These categories of primary care providers were not included in the study even though they attend to most of the patients presenting to primary care facilities in the region. There was an overall response rate of 77.2%, while the country response rates for Nigeria and Ghana were 66.1% and 135% respectively. The online format of the survey could explain the high non-response rate of 22.8%. A greater response was obtained from Ghanaian physicians; we thus recruited more physicians from Ghana to increase the power and to compensate, to some extent, for the poor response from Nigeria.
S1 Checklist STROBE statement—checklist of items that should be included in reports of observational studies. (DOCX) Click here for additional data file. S1 Data (XLSX) Click here for additional data file.
Validation of a preoperative predictor score for difficult laparoscopic cholecystectomy and a modified intraoperative grading score of the difficulty of laparoscopic cholecystectomy: from a resource-limited setting

Gallstones are an extremely common condition; they have been found in the gallbladders of Egyptian mummies dating back to 1000 BC. They occur in approximately 10–20% of the adult population: the prevalence is about 15% in the USA, 9–21% in Europe and 10% in Japan. More than 80% of gallstones do not cause symptoms, and only 10% and 20% will eventually become symptomatic within 5 and 20 years of diagnosis, respectively. Gallstones are a public health problem in Ethiopia: the overall prevalence of gallstone disease among patients admitted to a referral hospital in Ethiopia was 10.2%, and it accounts for 25.9% of all Gastrointestinal Unit admissions in Tikur Anbessa Hospital. The two commonly performed types of cholecystectomy are open cholecystectomy and laparoscopic cholecystectomy. Laparoscopic cholecystectomy (LC), since its first description in 1985, is now considered the gold standard for the treatment of gallstone disease. LC has clear advantages over the traditional open approach: less postoperative pain, a lower incidence of incisional hernias, fewer adhesions, smaller scars and less tissue damage, a shorter hospital stay, an earlier return to full activity, a lower overall cost, decreased morbidity and a quicker recovery. In countries where minimally invasive surgery is advanced, selection criteria for LC have become more liberal, and the absolute contraindications are uncontrolled coagulopathy, severe chronic obstructive pulmonary disease, congestive cardiac failure (ejection fraction < 20%) and high risk for general anesthesia. Difficult laparoscopic cholecystectomy (DLC) is a stressful situation for the surgeon and is accompanied by a greater risk of biliary, vascular and visceral injuries. Multiple factors that may influence the difficulty of a laparoscopic cholecystectomy have been described, such as age, sex, body mass index (BMI), palpable gall bladder (GB), impacted stone, anatomical variations and previous abdominal surgery [ , – ]. Assigning a score to these factors and developing a tool that predicts the difficulty of cholecystectomy can help in choosing the best approach (open or laparoscopic), selecting patients according to the level of physician training or arranging expert support, and informing the patient of the possible difficulty and increased risk of complications. A number of preoperative scoring systems have been reported for acute cholecystitis in well-developed countries [ – ]; however, information on a separate preoperative predictive score for symptomatic cholelithiasis alone that can be applied in resource-limited settings is scarce. Newly established laparoscopic units and less experienced surgeons usually start laparoscopic cholecystectomy with less complicated cases, such as symptomatic cholelithiasis, and they need a separate difficulty-predictor score for such disease. Our preoperative predictive score for DLC in symptomatic cholelithiasis can fill this gap. There are two commonly described intraoperative scoring tools that objectively measure the difficulty of laparoscopic cholecystectomy. The first is the Gupta N et al. and Khetan et al.
classification, which incorporates the time taken to finish the laparoscopic surgery, bile/stone spillage, injury to the duct or artery, and conversion to open cholecystectomy. Several limitations of this score have been noted. Some of the variables are subjective; for example, the time taken to finish the operation may vary with surgical skill and level of experience. Moreover, important operative findings that can strongly affect the difficulty of the operation, such as GB adhesion, GB distension/contraction, BMI and previous surgical scar, were not included. The other operative-finding score was by Sugrue et al., which incorporates GB adhesion, GB distension, BMI, previous surgical scar, pus/bile outside the GB, and the time taken to identify the cystic duct and artery. The Sugrue et al. score was not an original article; instead, it is an intraoperative score created from research whose purpose was to produce a preoperative predictive score of DLC. Moreover, important intraoperative findings that can objectively measure DLC, such as injury to the duct/artery and bile/stone spillage, were not included in that score. Our paper creates a modified scoring system to measure the difficulty of LC that incorporates comprehensive intraoperative findings: GB adhesion, presence of GB distension, BMI, adhesion from previous surgery, time taken to identify the cystic duct and artery, bile/stone spillage, injury to the duct/artery, conversion to open surgery, and type of ligature at laparoscopic cholecystectomy. We tried to fill the gaps of both the Gupta et al./Khetan et al. and Sugrue et al. scores. The aim of this study is to define a preoperative predictor score for difficult laparoscopic cholecystectomy and to establish a modified intraoperative grading score of the difficulty of laparoscopic cholecystectomy.
Study area and period
The study was conducted at Yekatit 12 Hospital Medical College and St Paul's Millennium Medical College, Addis Ababa, Ethiopia. Yekatit 12 Hospital Medical College has served the community for more than 100 years, with a current catchment population of more than five million. The college started laparoscopic cholecystectomy for symptomatic cholelithiasis two years before the study; LC was performed by one laparoscopically trained general surgeon and one hepatobiliary surgeon. St Paul's Millennium Medical College has an inpatient capacity of more than 700 beds, treats an average of 1200 emergency and outpatient clients daily, and had two trained laparoscopic hepatobiliary surgeons involved in LC during the study period. Clips were mostly used, but when laparoscopic clips were not available, extracorporeal suture ligation of the cystic duct and artery was done. The study period was from August 1, 2022 to July 30, 2024.
Study design
This is a prospective, cross-sectional, hospital-based study. Patients were contacted at the point in time when they were scheduled for LC; we collected the preoperative factors and then, at operation, recorded the intraoperative findings. There was no long-term follow-up of cases.
Study population
All patients with a diagnosis of symptomatic cholelithiasis who underwent laparoscopic cholecystectomy at Yekatit 12 Hospital Medical College and St Paul's Millennium Medical College between August 1, 2022 and July 30, 2024.
Inclusion criteria
All patients with symptomatic cholelithiasis, including previously treated acute cholecystitis and gallstone pancreatitis, who underwent elective laparoscopic cholecystectomy at Yekatit 12 Hospital Medical College and St Paul's Millennium Medical College between August 1, 2022 and July 30, 2024.
Exclusion criteria
Patients with acute cholecystitis or gall bladder cancer.
Data collection procedures
The research team systematically collected data using a checklist questionnaire modified from previous studies [ , , ]. Data were collected by surgical residents. Both preoperative and intraoperative parameters were recorded: diagnosis, age, gender, BMI, palpable gall bladder, abdominal scar, impacted stone, gall bladder appearance, distension/contraction, adhesions from previous surgery, time to identify the cystic artery and duct, time taken (minutes) to complete LC, bile/stone spillage, injury to the duct or artery, and conversion to open surgery.
Data analysis procedures
Data were entered and analyzed using the Statistical Package for the Social Sciences (SPSS) version 26. Percentages and counts were used for categorical variables. All variables with p < 0.05 at the 95% confidence interval in bivariate analysis were entered into a multivariate logistic regression model and analyzed to control for potential confounders. Results were analyzed and presented in a combination of textual, tabular and graphic formats.
Operational definitions
Difficult laparoscopic cholecystectomy (DLC) was characterized by numerous operative difficulties (parameters) incorporating the appearance of the GB, presence of GB distension, BMI, adhesion from previous surgery, and time taken to identify the cystic duct and artery. A score of ≤ 2 implies mild difficulty, 3–4 moderate, 5–7 severe and 8–10 extreme (Table ). The preoperative predictor score for DLC incorporates age, gender, history of admission for acute cholecystitis, body mass index (BMI), palpable gall bladder (GB), abdominal scar and impacted stone. A score of 0–2 is no risk, 3–7 moderate risk and 8–11 high risk (Table ). The intraoperative score of difficult LC incorporates the appearance of the GB, presence of GB distension, BMI, adhesion from previous surgery, time taken to identify the cystic duct and artery, bile/stone spillage, injury to the duct/artery, conversion to open surgery and type of ligature. A score of 0–3 implies mild difficulty, 4–7 moderate, 8–11 severe and 12–16 extreme (Table ).
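To make the banding of these scores concrete, the following is a minimal Python sketch of how a raw total maps to the categories defined above. The function and constant names are illustrative only and are not part of the study's materials; the cut-offs themselves come directly from the operational definitions.

```python
def grade(total: int, bands: list[tuple[int, str]]) -> str:
    """Return the label of the first band whose upper bound contains `total`."""
    for upper, label in bands:
        if total <= upper:
            return label
    raise ValueError(f"score {total} exceeds the defined range")

# Band definitions taken from the operational definitions above.
PREOP_RISK = [(2, "no risk"), (7, "moderate risk"), (11, "high risk")]
INTRAOP_DIFFICULTY = [(3, "mild"), (7, "moderate"), (11, "severe"), (16, "extreme")]

if __name__ == "__main__":
    print(grade(6, PREOP_RISK))          # -> moderate risk
    print(grade(9, INTRAOP_DIFFICULTY))  # -> severe
```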
Of the 200 patients included in this study, 185 (92.5%) were female and 15 (7.5%) were male. The mean age of participants was 47.3 ± 11 years, and the majority were in the age group < 50 years (N = 126, 63%). From the calculated BMI, 76.5% (153) had a BMI of ≤ 30. Those with a history of hospital admission for acute cholecystitis accounted for 21% (N = 42). A history of previous surgery was noted in 19 patients: infraumbilical in 8% (16 patients) and supraumbilical in 1.5% (3 patients). An impacted stone on imaging was noted in 30 (15%) patients. Bile/stone spillage was identified in 37 (18.5%) cases, which were promptly managed with saline irrigation and suction, with stones picked up with laparoscopic forceps (Table ). A total of 16 (8%) cases were converted to open surgery, all because of dense adhesions at Calot's triangle (Fig. ). The LC operative outcome was easy in 70.5% (141) and difficult in 29.5% (59) (Fig. ). In the preoperative score, 69% (138), 29% (58) and 2% (4) were scored as easy, difficult and very difficult, respectively. For the purpose of analysis and interpretation, we reorganized the preoperative score into easy and difficult. The relation between the preoperative prediction of difficulty and the actual outcome is shown in (Table ); the area under the receiver operating characteristic (ROC) curve = 0.948 (Fig. ). In the intraoperative score, 70.5% (141), 24.5% (49) and 5.0% (10) were scored as easy, moderate and severe difficulty, respectively. For the purpose of analysis and interpretation, we reorganized the intraoperative score into easy and difficult. The relation between the intraoperative grading of difficulty and the actual outcome is shown in (Table ); the area under the ROC curve = 0.94 (Fig. ). Operative outcome was correlated with the various preoperative and intraoperative factors included in the scoring system. Data were analyzed first by bivariate logistic regression, and factors statistically significant on bivariate analysis were entered into multivariate logistic regression to identify factors significantly associated with the outcome variable (Table ). We observed that age > 50 years, male sex, history of admission for acute cholecystitis, BMI > 30, palpable GB, impacted stone on imaging, previous abdominal surgical scar, GB appearance/adhesion, time to identify the cystic artery/duct, bile/stone spillage, conversion to open cholecystectomy and type of ligature were significantly associated factors in the bivariate analysis. However, on multivariate logistic regression analysis, the risk factors for difficulty in laparoscopic cholecystectomy were age ≥ 50 years (p < 0.035), history of admission for acute cholecystitis (p < 0.0001), BMI > 30 (p < 0.025), palpable GB (p < 0.0001), impacted stone on imaging (p < 0.002), adhesion burying the GB (p < 0.001), time to identify the cystic artery/duct (p < 0.010), bile/stone spillage (p < 0.041) and type of ligature (p < 0.005).
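The two-step selection described above (a bivariate screen followed by a multivariate model) can be illustrated with a short Python sketch using pandas and statsmodels. The data frame and column names below are hypothetical stand-ins for the study's checklist variables, not the actual dataset, and the study itself used SPSS rather than Python.

```python
import pandas as pd
import statsmodels.api as sm

def two_step_logistic(df: pd.DataFrame, outcome: str, candidates: list[str],
                      alpha: float = 0.05):
    """Screen each candidate in a bivariate logistic model at `alpha`,
    then fit one multivariate logistic model on the survivors."""
    kept = []
    for var in candidates:
        fit = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
        if fit.pvalues[var] < alpha:
            kept.append(var)
    multi = sm.Logit(df[outcome], sm.add_constant(df[kept])).fit(disp=0)
    return kept, multi

# Hypothetical usage: 'difficult' is the 0/1 operative outcome; predictors are
# coded checklist variables such as age_ge_50, bmi_gt_30, palpable_gb.
# kept, model = two_step_logistic(df, "difficult", ["age_ge_50", "bmi_gt_30", "palpable_gb"])
# print(model.summary())
```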
Age is a risk factor for difficult GB surgery. In the present series, patients aged ≥ 50 years were 8 times more likely to have a difficult laparoscopic cholecystectomy than those aged under 50 years. Male sex has been described as being associated with difficult LC; in our study, however, sex was not statistically associated with a high risk of difficult cholecystectomy. Obesity poses a great challenge to the safe and timely completion of the procedure owing to various factors, including difficult umbilical port (peritoneal) access and dissection of a fatty Calot's triangle. In our study, we found a strong correlation between BMI > 30 and the difficulty level of laparoscopic cholecystectomy (p < 0.025). A history of acute cholecystitis attacks increases scarring and fibrosis of the GB as well as adhesions at Calot's triangle, and there is a linear correlation between previous hospitalization for acute attacks of cholecystitis and the difficulty level of LC. These findings are similar to our study, in which a history of an acute attack requiring hospitalization was one of the main factors for difficulty in laparoscopic cholecystectomy (p < 0.0001). A clinically palpable gall bladder could be due to a distended GB, a mucocele of the GB or adhesions between the GB and the omentum, and it has been found to be a predictor of difficult LC. Similarly, in our study, a palpable GB was a statistically significant predictor of difficult laparoscopic cholecystectomy (p < 0.0001). During LC, a stone impacted at the neck of the GB makes it difficult to grasp the GB neck and obtain adequate retraction for dissection at Calot's triangle; it is a recognized risk factor for DLC and was a statistically significant predictor of difficulty in our study (p < 0.002). Previous upper abdominal surgery may cause intraperitoneal adhesions and has been found to be a statistically significant factor for difficulty of LC in several studies [ , , ]. In our study, 16 patients had a history of infraumbilical surgery and 3 had a supraumbilical scar; all 3 patients with a previous supraumbilical scar had a difficult LC, but the association was statistically insignificant. Patients with adhesions burying the gall bladder have a high chance of a DLC. In our study, all 11 patients with adhesions burying the GB were converted to open surgery, and this showed a statistically significant association with DLC (p < 0.001). A time of > 90 minutes to identify the cystic artery/duct was significantly associated with difficulty of LC (p < 0.010), and intraoperative bile/stone spillage also showed a significant association with a difficult operation (p < 0.041). In laparoscopic cholecystectomy, ligation of the cystic duct and artery with clips takes less time than with silk suture; application of a stitch takes significantly longer than a clip. In our study, suture ligation had a statistically significant association with difficulty of the LC operation (p < 0.005). In our study, the preoperative scoring system has a sensitivity of 95.5%, specificity of 96.9%, PPV of 94.1%, NPV of 97.7% and AUC of 0.948, indicating a score with high sensitivity and specificity and an excellent area under the ROC curve (> 0.9). Interpretation of the area under the curve (AUC): AUC ≥ 0.9, excellent; 0.8 ≤ AUC < 0.9, good; 0.7 ≤ AUC < 0.8, fair; 0.6 ≤ AUC < 0.7, poor; 0.5 ≤ AUC < 0.6, fail. For a diagnostic test to be meaningful, the AUC must be greater than 0.5; generally, an AUC ≥ 0.8 is considered acceptable.
Based on this, our study AUC of 0.959 is excellent. Our preoperative score validity statistics are comparable to those of a study in Delhi, with a sensitivity, specificity, PPV and AUC of 95.74%, 73.68%, 88% and 0.86, respectively (Gupta 2013), and to a study in Colombia where the area under the ROC curve was 0.88; the ideal cutoff there was 8, with a sensitivity of 75.15%, specificity of 88.31%, PPV of 87.32%, NPV of 76.83% and AUC of 0.88 (Camilo R 2022). Compared with Sugrue, our modified intraoperative measure of the difficulty of laparoscopic cholecystectomy has a sensitivity of 96.6%, specificity of 98.5%, PPV of 96.6%, NPV of 98.5% and AUC of 0.94, indicating a score with high sensitivity and specificity and an excellent area under the ROC curve (> 0.9).
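For reference, the validity statistics quoted above follow directly from a 2×2 table of predicted versus actual difficulty. The snippet below is an illustrative Python check with made-up counts, not the study's data.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among all truly difficult cases
        "specificity": tn / (tn + fp),  # true negatives among all truly easy cases
        "ppv": tp / (tp + fp),          # predicted-difficult cases that were difficult
        "npv": tn / (tn + fn),          # predicted-easy cases that were easy
    }

# Made-up counts for illustration only.
print(diagnostic_metrics(tp=56, fp=3, fn=3, tn=138))
```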
Older age, history of admission for acute cholecystitis, higher BMI, palpable GB, impacted stone on imaging, GB adhesion, time to identify the cystic artery/duct, bile/stone spillage and type of ligature were found to be statistically significant factors for difficult LC. The preoperative score is statistically and clinically a good test for predicting the difficulty level of laparoscopic cholecystectomy (area under the ROC curve = 0.948). The modified intraoperative measure of LC is statistically and clinically a good test for classifying the operative outcome of LC (area under the ROC curve = 0.94).
Limitations of the study
Among the limitations of the study is the subjectivity of some intraoperative findings, such as gall bladder adhesion and conversion to open surgery; we tried to reduce this by excluding cholecystectomies done by general surgeons not trained in laparoscopic surgery. The sample size is also small. A larger study is required, especially for our modified intraoperative score, which has been investigated less even in previous studies.
Electronic supplementary material: Supplementary Material 1; Supplementary Material 2.
Application of organoids in otolaryngology: head and neck surgery

Definition and characteristics of organoids
Organoids, as miniature tissue and organ analogs, have three characteristics: self-assembly, multiple cell types, and structure and function closely resembling those of the corresponding organs in vivo. Organoids are cultured in vitro with 3D technology, in which multicellular masses can closely simulate the physiological and pathological structure and tumor cell heterogeneity of tissues or organs in vivo. As organoids of various organs and tissues, such as the intestine, stomach, liver and kidney [ – ], have been successfully cultured in vitro, the huge potential of organoid technology has been continuously developed. Organoid technology provides in vitro conditions for understanding the mechanisms of tissue and organ development and of disease, and for precision medicine. In addition, it can be used for drug toxicity testing, efficacy evaluation and new drug screening. However, there is still a lack of systematic synthesis of studies on organoid technology in otolaryngology, head and neck surgery. Therefore, this paper summarizes the latest research results on organoids in otolaryngology—head and neck surgery and discusses the future development of organoid technology. The acquisition, construction and application of organoids are shown in Fig. .

Establishment of organoid model
Organoids are mainly cultured from stem cells, including pluripotent stem cells, adult stem cells and tumor stem cells. Presently, most organoid modeling methods require stem cells, Matrigel and a cytokine-rich medium, and an organoid model can be established in an average of 10–14 days. The organoid construction process is shown in Fig. .

Preparation of tissue
Organoids can be produced from solid materials, such as surgical specimens, puncture biopsy specimens and nasal brush specimens, or from liquids such as urine, ascites and bronchoalveolar lavage fluid [ – ]. For solid specimens, the first step is to remove the non-epithelial tissue (such as muscle or fat) as much as possible, then use a scalpel to cut the specimen into 1–3-mm pieces, digest the tissue with enzymes and separate the epithelial cells.

Inoculation of cells
The isolated cells or small cell masses are inoculated into 3D extracellular matrix (ECM) hydrogels, such as basement membrane extract (BME), Matrigel or Geltrex, which can serve as an artificial lamina propria.

Organoid culture
After inoculation, cells are supplemented with a medium consisting of a mixture of growth factors that trigger regenerative or damage responses in epithelial tissue stem cells. The key components include: (i) activators of Wnt signaling, such as Wnt ligands and the LGR5 ligand R-spondin (RSPO) [ – ]; (ii) tyrosine receptor kinase ligands, such as epidermal growth factor (EGF), capable of promoting epithelial cell proliferation; and (iii) inhibitors of the transforming growth factor-β/bone morphogenetic protein signaling pathway, such as Noggin, which induces epithelial differentiation.
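As a compact summary of the culture protocol just described, the sketch below encodes the workflow stages and the key medium components as plain Python data. The component roles are taken from the text above; concentrations and vendor details are deliberately omitted because the source does not specify them, so this is an illustrative outline rather than a validated recipe.

```python
# Key growth-factor classes named in the text, mapped to their role in the medium.
ORGANOID_MEDIUM = {
    "Wnt ligand / R-spondin (RSPO)": "activates Wnt signaling (RSPO via LGR5)",
    "EGF": "tyrosine receptor kinase ligand; promotes epithelial proliferation",
    "Noggin": "inhibits TGF-beta/BMP signaling; permits epithelial differentiation",
}

# The workflow stages described above, in order.
WORKFLOW = [
    "trim non-epithelial tissue and mince the specimen into 1-3 mm pieces",
    "digest with enzymes and isolate epithelial cells",
    "inoculate cells into an ECM hydrogel (BME, Matrigel or Geltrex)",
    "culture ~10-14 days in growth-factor-supplemented medium",
]

if __name__ == "__main__":
    for step, action in enumerate(WORKFLOW, start=1):
        print(f"step {step}: {action}")
    for component, role in ORGANOID_MEDIUM.items():
        print(f"{component}: {role}")
```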
Otology research
More than 6% of the world's population suffers from hearing loss and balance impairment. Both sensory systems are located in the inner ear and can be affected by aging, genetic mutations, infections, noise exposure and ototoxic drugs. Hearing loss is irreversible, and there are currently no medications that specifically target sensory recovery. As 3D multicellular systems that simulate the inner ear in vitro, inner ear organoids are promising new tools for realizing cell replacement therapy and for understanding inner ear nerve cells.

Culture of inner ear organoids from pluripotent stem cells
Unlike the tissues used for other organoids, the inner ear is difficult to biopsy and to grow for a long time, so patient-sourced tissue cannot be used, and using fetal-sourced tissue raises ethical issues. Therefore, human pluripotent stem cells (hPSCs) may be a potential source of cells for experiments. hPSCs differentiate into ear progenitor cells and more mature inner ear cells by mimicking embryonic and fetal development [ , , ]. In embryos, the development of the inner ear requires the participation of multiple cell types from multiple lineages, including inner ear epithelial cells, neurons and glial cells from the ectoderm, and periauricular mesenchymal cells from the mesoderm. The challenge is to assemble these multiple cell lineages into an inner ear organoid in vitro, which is a long-term bioengineering problem. As an extremely complex organ, the inner ear is formed by the integration of many signaling pathways across space and time. These signals come from within and around the epithelium and drive cochlear progenitor cells to differentiate into cochlear and vestibular cells. Most of our knowledge of these mechanisms comes from animal models, and very little work has been done on human fetal inner ear tissue. To some extent, the self-assembly of inner ear epithelial cells and neuronal complexes can be stimulated by using recombinant proteins and small molecules to simulate these signals in hPSC 3D culture. However, this approach is difficult to control, and the resulting organoids are of irregular shape and size and contain an unpredictable mix of sensory and non-sensory cells. In future studies, more sophisticated approaches based on 3D bioprinting or microfluidics may be needed to build spatially controlled cell structures that can be exposed to signal gradients, creating an inner ear organoid-on-a-chip. Recent findings that microfluidic or microwell systems enable hPSCs to form embryo-like, renal or intestinal structures can guide efforts to induce inner ear formation [ , – ].

Inner ear organoids simulate hereditary deafness
It is estimated that 430 million people worldwide suffer from moderate to severe hearing loss. Most permanent hearing loss is of the sensorineural type (SNHL), and its causes include aging, infection, noise, ototoxic drugs, traumatic tympanic membrane rupture and single-gene mutations. Although the etiology of SNHL is largely established, its underlying pathophysiological mechanisms have not been fully elucidated at the cellular and molecular levels. Therefore, the use of inner ear organoids to model hereditary deafness is a very valuable application. There are generally two approaches to in vitro modeling of hereditary deafness. The first involves using CRISPR-Cas9, guided editing or other precision genome editing techniques to introduce deafness-related mutations into wild-type embryonic stem cell (ESC) lines.
The second is to obtain somatic cells from patients with inherited deafness, induce them to become induced pluripotent stem cells (iPSCs), and then gradually induce the iPSCs, or the CRISPR-Cas9-edited ESCs, to differentiate into inner ear-like tissues. The use of iPSCs clearly has greater therapeutic potential than that of ESCs, as iPSC-derived donor cells used to treat the inner ear can avoid rejection. Researchers have modeled two types of autosomal recessive non-syndromic deafness, DFNB2 and DFNB3, using a 2D culture system based on hiPSCs. In contrast, an organoid-based 3D culture system readily allows single-cell RNA sequencing (scRNA-seq) of inner ear-like tissues. Tang et al. investigated human hearing loss caused by mutations in the gene encoding type II transmembrane protease 3 (TMPRSS3) using inner ear organoids and scRNA-seq, revealing a potential role for calcium homeostasis and extracellular matrix maintenance in TMPRSS3-associated deafness.

Rhinology research
Nasopharyngeal carcinoma organoid disease model
Nasopharyngeal carcinoma (NPC) is a malignant tumor of the top and lateral walls of the nasopharynx; its prevalence is strongly region-dependent, and it is closely correlated with Epstein–Barr virus infection. For patients with stage I, II, III or IV NPC, the 5-year survival rate is more than 70% after comprehensive treatment with radiotherapy and chemotherapy, but treatment fails in 25% of NPC patients because of local recurrence and distant metastasis. Studies have shown that tumor stem cells are closely related to the occurrence, development, recurrence and metastasis of cancer: they can self-renew and are resistant to traditional chemoradiotherapy. It is therefore valuable to screen for chemotherapeutic drugs to which NPC stem cells are sensitive. Tumor organoids are patients' tumor tissues cultured in vitro in 3D; tumor cells with stem-cell potential converge and grow into spheres, forming organoids with the capacity for self-renewal and self-organization. Compared with traditional cell lines and xenotransplantation, tumor organoids have the advantage of individualization; in addition, they require less tissue and have a shorter culture cycle, which helps in screening chemotherapy or targeted drugs accurately and efficiently. However, tumor organoids lack a vascular and immune environment, so they remain limited for screening immunotherapy drugs. Patient-derived xenografts (PDXs) have been used in NPC studies, but their low success rate and high cost limit their large-scale application. NPC tissues are usually obtained by endoscopic biopsy, and their small size and poor cell viability pose major challenges for nasopharyngeal carcinoma organoid (NPCO) culture. Wang et al. established a patient-derived organoid model and showed that an optimized medium can significantly increase the success rate of NPCO culture, preserve parental tumor heterogeneity and reproduce its pathophysiological features. However, many challenges to generating NPCOs remain, including the overgrowth of fibroblasts. There are many infiltrating lymphocytes in NPC tissues, and it has been reported that T and B lymphocytes can secrete cytokines that control the growth of fibroblasts. However, after several passages, the immune cells in NPCOs stop growing and die, resulting in decreased concentrations of TGF-β and TNF-α, which can then no longer inhibit fibroblast growth.
Further studies are needed to understand how to inhibit the growth of fibroblasts so as to prolong the passaging of NPCOs.

Nasal organoid respiratory virus model
Respiratory organoids are often used as an in vitro airway model to study the pathogenesis of respiratory viruses and to test therapeutic methods. However, respiratory organoid technology requires invasive methods to obtain patient samples. Rajan et al. reported a non-invasive technique using human nasal organoids (HNOs) as an alternative to tissue-derived organoids. HNOs were cultured at an air–liquid interface (ALI), and infection with two major human respiratory viruses, respiratory syncytial virus (RSV) and the novel coronavirus (SARS-CoV-2), was evaluated, reproducing the complex host–virus interaction. SARS-CoV-2 causes severe damage to cilia and epithelial cells [ – ], with no type I interferon response and little mucus secretion; in contrast, RSV causes mucus hypersecretion and a severe type I interferon response with ciliary damage. Chiu et al. also reported the establishment of a nasal organoid model to study SARS-CoV-2 infection. They further reproduced the higher infectivity and replicative adaptability of the Omicron variant and demonstrated elements of its pathogenesis, such as the destruction of ciliated cells and tight junctions, which promote the spread and development of the virus. The nasal organoid respiratory virus model, which simulates upper respiratory tract infection and effectively reconstructs human nasal epithelium in a stable culture plate, provides microbiologists with a powerful and convenient tool to study the pathogenesis of, and to test treatments for, the current epidemic of SARS-CoV-2 and its emerging variants.

Pharyngology, head and neck surgery research
Head and neck squamous cell carcinoma prediction model
Head and neck malignancies are the seventh most common tumor type worldwide, and more than 90% are squamous cell carcinomas (HNSCC). The emergence of new therapies, including targeted therapies and immunotherapies, is increasing the need to test treatment options in a personalized setting. Currently, new treatments for HNSCC are mainly tested at the population level, meaning that a group of patients contains multiple subgroups with different efficacy and side-effect profiles. This makes it difficult to predict how well a therapy will work for an individual patient; consequently, new therapies are often tested only as palliative treatments in patients with advanced HNSCC. HNSCC organoids fill this gap in the personalized prediction of treatment outcomes. In a specific culture environment, tumor tissue can be grown into organoid models to test different treatments and predict treatment outcomes before the patient is treated. In addition, they could allow new drugs to be tested before entering clinical practice.

Tissue-derived thyroid organoid model
Total thyroidectomy as a treatment for thyroid cancer causes hypothyroidism and requires patients to take thyroid hormone for life. Ogundipe et al. isolated cells from mouse and human thyroid tissue and developed an in vitro 3D culture system. They demonstrated that both mouse and human thyroid cells can be isolated, expanded in vitro and cultured long term. Furthermore, these cells were able to self-renew and differentiate in vitro, suggesting that they have the proliferative capacity required for expansion. After transplantation of a small number of cells, these organoids formed fully functional hormone-producing thyroid follicles in mice with hypothyroidism.
Two important aspects of thyroid organoid culture warrant further investigation with this technique. Firstly, organoids can be expanded while maintaining genetic stability, indicating that they have a degree of safety as grafts. Secondly, thyroid organoids for patients who have undergone radical surgery for thyroid cancer could be generated from cryopreserved stem cells, which can come from bone marrow or adipose tissue.
More than 6% of the world's population suffers from hearing loss and balance impairment . Both sensory systems are located in the inner ear and can be affected by aging, genetic mutations, infections, noise exposure and ototoxic drugs. Hearing loss is irreversible, and there are currently no medications that specifically target sensory recovery. As a 3D multicellular system that simulates the inner ear in vitro, inner ear organoids are promising new tools to realize cell replacement therapy and understand the inner ear nerve cells . Culture of inner ear organoids from pluripotent stem cells Unlike other organoids, the inner ear is difficult to biopsy and grow for a long time , so patient-sourced tissue cannot be used, and using fetal-sourced tissue has ethical issues. Therefore, human pluripotent stem cells (hPSCs) may be a potential source of tissue cells for experiments. hPSCs differentiate into ear progenitor cells and more mature inner ear cells by mimicking embryonic and fetal development [ , , ]. In embryos, the development of the inner ear requires the participation of multiple cell types from multiple cell lineages, including inner ear epithelial cells, neuron cells and glial cells from the ectoderm, and periauricular mesenchymal cells from the mesoderm . The challenge is synthesizing these multicell lines into an inner ear organoid in vitro, which is a long-term bioengineering challenge. As an extremely complex organ, the inner ear is formed by integrating many signal pathways across space and time. These signals come from the inner and surrounding tissues of the epithelial cells, which make the cochlear progenitor cells differentiate into cochlear and vestibular cells. Most of our knowledge of these mechanisms comes from animal models, and very little has been done on human fetal inner ear tissue . To some extent, the self-assembly of inner ear epithelial cells and neuronal complexes can be stimulated by using recombinant proteins and small molecules to simulate signals in hPSC 3D culture . However, this approach is difficult to control, and the resulting organoids are of irregular shape and size and contain an unpredictable mix of sensory and non-sensory cells. In future studies, more sophisticated 3D bioprinting-based or microfluid-based approaches may be needed to build spatially controlled cell structures that can be affected by signal gradients to create an inner ear organoid chip. Recent studies have found that the use of microfluidics or microwell systems to enable hPSCs to form embryonic-like, renal or intestinal structures can guide studies to induce inner ear formation [ , – ]. Inner ear organoids simulate hereditary deafness It is estimated that 430 million people worldwide suffer from moderate to severe hearing loss . The most permanent hearing loss is of the sensorineural type (SNHL), and the causes include aging , infection , noise , ototoxic drug , traumatic tympanic membrane rupture and single gene mutation. Although the etiology of SNHL is largely established, its underlying pathophysiological mechanisms have not been fully elucidated at the cellular and molecular levels. Therefore, the use of inner ear organoids to model hereditary deafness is a very valuable application. There are generally two approaches to in vitro modeling of hereditary deafness. The first involves using CRISPR-Cas9 to introduce deafness-related mutations into wild embryonic stem cell (ESC) lines , guided editing or other precision genome editing techniques. 
The second is to obtain somatic cells from patients with inherited deafness and induce them to be transformed into induced pluripotent stem cells (iPSCs) and then gradually induce iPSCs or CrisPR-Cas9-edited ESCs to differentiate into inner ear-like tissues. The use of iPSCs clearly has greater therapeutic potential than ESCs, as the use of IPSC-derived donor cells in the treatment of the inner ear can avoid rejection. Researchers have modeled two types of autosomal recessive non-syndromic deafness, DFNB2 and DFNB3, using a 2D culture system based on hiPSC . However, the organoid-based 3D culture system can easily perform single-cell RNA sequencing (RNA-SEQ) on inner ear-like tissues. Tang et al. investigated human hearing loss caused by mutations in the gene encoding type II transmembrane protease 3 (TMPRSS3) using inner ear organoid and scRNA-seq, revealing a potential role for calcium homeostasis and extracellular matrix maintenance in TMPRSS3-associated deafness .
Unlike other organoids, the inner ear is difficult to biopsy and grow for a long time , so patient-sourced tissue cannot be used, and using fetal-sourced tissue has ethical issues. Therefore, human pluripotent stem cells (hPSCs) may be a potential source of tissue cells for experiments. hPSCs differentiate into ear progenitor cells and more mature inner ear cells by mimicking embryonic and fetal development [ , , ]. In embryos, the development of the inner ear requires the participation of multiple cell types from multiple cell lineages, including inner ear epithelial cells, neuron cells and glial cells from the ectoderm, and periauricular mesenchymal cells from the mesoderm . The challenge is synthesizing these multicell lines into an inner ear organoid in vitro, which is a long-term bioengineering challenge. As an extremely complex organ, the inner ear is formed by integrating many signal pathways across space and time. These signals come from the inner and surrounding tissues of the epithelial cells, which make the cochlear progenitor cells differentiate into cochlear and vestibular cells. Most of our knowledge of these mechanisms comes from animal models, and very little has been done on human fetal inner ear tissue . To some extent, the self-assembly of inner ear epithelial cells and neuronal complexes can be stimulated by using recombinant proteins and small molecules to simulate signals in hPSC 3D culture . However, this approach is difficult to control, and the resulting organoids are of irregular shape and size and contain an unpredictable mix of sensory and non-sensory cells. In future studies, more sophisticated 3D bioprinting-based or microfluid-based approaches may be needed to build spatially controlled cell structures that can be affected by signal gradients to create an inner ear organoid chip. Recent studies have found that the use of microfluidics or microwell systems to enable hPSCs to form embryonic-like, renal or intestinal structures can guide studies to induce inner ear formation [ , – ].
It is estimated that 430 million people worldwide suffer from moderate to severe hearing loss . The most permanent hearing loss is of the sensorineural type (SNHL), and the causes include aging , infection , noise , ototoxic drug , traumatic tympanic membrane rupture and single gene mutation. Although the etiology of SNHL is largely established, its underlying pathophysiological mechanisms have not been fully elucidated at the cellular and molecular levels. Therefore, the use of inner ear organoids to model hereditary deafness is a very valuable application. There are generally two approaches to in vitro modeling of hereditary deafness. The first involves using CRISPR-Cas9 to introduce deafness-related mutations into wild embryonic stem cell (ESC) lines , guided editing or other precision genome editing techniques. The second is to obtain somatic cells from patients with inherited deafness and induce them to be transformed into induced pluripotent stem cells (iPSCs) and then gradually induce iPSCs or CrisPR-Cas9-edited ESCs to differentiate into inner ear-like tissues. The use of iPSCs clearly has greater therapeutic potential than ESCs, as the use of IPSC-derived donor cells in the treatment of the inner ear can avoid rejection. Researchers have modeled two types of autosomal recessive non-syndromic deafness, DFNB2 and DFNB3, using a 2D culture system based on hiPSC . However, the organoid-based 3D culture system can easily perform single-cell RNA sequencing (RNA-SEQ) on inner ear-like tissues. Tang et al. investigated human hearing loss caused by mutations in the gene encoding type II transmembrane protease 3 (TMPRSS3) using inner ear organoid and scRNA-seq, revealing a potential role for calcium homeostasis and extracellular matrix maintenance in TMPRSS3-associated deafness .
Nasopharyngeal carcinoma organoid disease model The nasopharyngeal carcinoma (NPC) is a tailor-made malignant tumor with a high geographical prevalence in the top and lateral walls of the nasopharynx and is closely correlated with Epstein–Barr virus infection. For patients with NPC at the beginning of stage I, II, III and IV, the 5-year survival rate is more than 70% after comprehensive treatment with radiotherapy and chemotherapy , but 25% of NPC patients fail due to local recurrence and distant metastasis . Studies have shown that tumor stem cells are closely related to the occurrence, development, recurrence and metastasis of cancer. They can not only self-renew but also have the resistance effect to traditional chemoradiotherapy. Therefore, it is valuable to screen out chemotherapeutic drugs that NPC stem cells sensitive to. Tumor organoids are the tumor tissues of patients cultured in vitro in 3D, and tumor cells with the potential of stem cells will converge and grow spherical, thus forming organoids with the ability of self-renewal and self-organization. Compared with traditional cell lines and xenotransplantation, tumor organoids have the advantage of individualization. In addition, the tissue needs are less, and the culture cycle is shorter. It is helpful to screen chemotherapy drugs or targeted drugs accurately and efficiently . However, it does not have a vascular and immune environment, so it still has limitation when used to screen immunotherapy drugs. Patient-derived xenografts (PDXs) have been used in NPC studies, but the low success rate and high cost of PDXs limit their large-scale application . NPC tissues are usually obtained by endoscopic biopsy, and their small tissue size and poor cell activity pose major challenges in Nasopharyngeal carcinoma organoid (NPCO) culture. Wang et al. established a patient-derived organoid model, and an optimized medium can significantly increase the success rate of NPCO culture, preserve parental tumor heterogeneity and reproduce its pathophysiological features . However, there are still many challenges to generating NPCOs, including the overgrowth of fibroblasts. There are many infiltrating lymphocytes in NPC tissues . It has been reported that T and B lymphocytes can secrete cytokines to control the growth of fibroblasts . However, after several passages, the immune cells in NPCOs would stop growing and die, resulting in decreased concentrations of TGF-β and TNF-α, which could not inhibit fibroblast growth . Further studies are needed to understand how to inhibit the growth of fibroblasts to prolong the passage of NPCOs. Nasal organoid respiratory virus model Respiratory organoids are often used as an in vitro airway model to study the pathogenesis of respiratory viruses and test therapeutic methods . However, respiratory organoids technology needs to use invasive methods to obtain patient samples. Rajan et al. reported a non-invasive technique using human nasal organoids (HNOs) as an alternative to tissue-derived organoids . HNOs were cultured in an air–liquid interface (ALI), and the infection of two major human respiratory viruses, including respiratory syncytial virus (RSV) and novel coronavirus (SARS-CoV-2), was evaluated, reproducing the complex host-virus interaction. SARS-CoV-2 causes severe damage to cilia and epithelial cells [ – ], no interferon-I response and little mucus secretion. In contrast, RSV causes mucous hypersecretion and severe interferon-I response with ciliary body damage. Chiu et al. 
also reported the establishment of a nasal organoid model to study the infection of SARS-CoV-2. They further reproduced the higher infectability and replication adaptability of the Omicron variant showing its pathogenesis, such as the destruction of ciliary cells and tight connections, to promote the spread and development of the virus . The nasal organoid respiratory virus model, which simulates upper respiratory tract infections and effectively reconstructs human nasal epithelium in a stable culture plate, provides microbiologists with a powerful and convenient tool to study the pathogenesis and test treatments for the current epidemic of SARS-CoV-2 and its emerging variants.
The nasopharyngeal carcinoma (NPC) is a tailor-made malignant tumor with a high geographical prevalence in the top and lateral walls of the nasopharynx and is closely correlated with Epstein–Barr virus infection. For patients with NPC at the beginning of stage I, II, III and IV, the 5-year survival rate is more than 70% after comprehensive treatment with radiotherapy and chemotherapy , but 25% of NPC patients fail due to local recurrence and distant metastasis . Studies have shown that tumor stem cells are closely related to the occurrence, development, recurrence and metastasis of cancer. They can not only self-renew but also have the resistance effect to traditional chemoradiotherapy. Therefore, it is valuable to screen out chemotherapeutic drugs that NPC stem cells sensitive to. Tumor organoids are the tumor tissues of patients cultured in vitro in 3D, and tumor cells with the potential of stem cells will converge and grow spherical, thus forming organoids with the ability of self-renewal and self-organization. Compared with traditional cell lines and xenotransplantation, tumor organoids have the advantage of individualization. In addition, the tissue needs are less, and the culture cycle is shorter. It is helpful to screen chemotherapy drugs or targeted drugs accurately and efficiently . However, it does not have a vascular and immune environment, so it still has limitation when used to screen immunotherapy drugs. Patient-derived xenografts (PDXs) have been used in NPC studies, but the low success rate and high cost of PDXs limit their large-scale application . NPC tissues are usually obtained by endoscopic biopsy, and their small tissue size and poor cell activity pose major challenges in Nasopharyngeal carcinoma organoid (NPCO) culture. Wang et al. established a patient-derived organoid model, and an optimized medium can significantly increase the success rate of NPCO culture, preserve parental tumor heterogeneity and reproduce its pathophysiological features . However, there are still many challenges to generating NPCOs, including the overgrowth of fibroblasts. There are many infiltrating lymphocytes in NPC tissues . It has been reported that T and B lymphocytes can secrete cytokines to control the growth of fibroblasts . However, after several passages, the immune cells in NPCOs would stop growing and die, resulting in decreased concentrations of TGF-β and TNF-α, which could not inhibit fibroblast growth . Further studies are needed to understand how to inhibit the growth of fibroblasts to prolong the passage of NPCOs.
Respiratory organoids are often used as an in vitro airway model to study the pathogenesis of respiratory viruses and test therapeutic methods . However, respiratory organoids technology needs to use invasive methods to obtain patient samples. Rajan et al. reported a non-invasive technique using human nasal organoids (HNOs) as an alternative to tissue-derived organoids . HNOs were cultured in an air–liquid interface (ALI), and the infection of two major human respiratory viruses, including respiratory syncytial virus (RSV) and novel coronavirus (SARS-CoV-2), was evaluated, reproducing the complex host-virus interaction. SARS-CoV-2 causes severe damage to cilia and epithelial cells [ – ], no interferon-I response and little mucus secretion. In contrast, RSV causes mucous hypersecretion and severe interferon-I response with ciliary body damage. Chiu et al. also reported the establishment of a nasal organoid model to study the infection of SARS-CoV-2. They further reproduced the higher infectability and replication adaptability of the Omicron variant showing its pathogenesis, such as the destruction of ciliary cells and tight connections, to promote the spread and development of the virus . The nasal organoid respiratory virus model, which simulates upper respiratory tract infections and effectively reconstructs human nasal epithelium in a stable culture plate, provides microbiologists with a powerful and convenient tool to study the pathogenesis and test treatments for the current epidemic of SARS-CoV-2 and its emerging variants.
Head and neck squamous cell carcinoma prediction model Head and neck malignancies are the seventh most common tumor types worldwide, among which more than 90% are . The emergence of therapies, including targeted therapies and immunotherapies, is increasing the need to test treatment options in personalized settings. Currently, new treatments for HNSCC are mainly tested at the population level, meaning that within a group of patients, multiple subgroups with different efficacy and side effects are included. This makes it difficult to predict how well a therapy will work for an individual patient. Consequently, new therapies are often only tested as palliative treatments in patients with advanced HNSCC. HNSCC organoids fill the gap in personalized prediction of treatment outcomes. In a specific culture environment, tumor tissue can be grown into organoid models to test different treatments, predict treatment outcomes, and then treat patients. In addition, it could allow new drugs to be tested before entering clinical practice. Tissue-derived thyroid organoid model Total thyroidectomy as a treatment for thyroid cancer can cause hypothyroidism and require patients to take thyroid hormones for life. Ogundipe et al. isolated cells from mouse and human thyroid tissue and developed an in vitro 3D culture system . It was demonstrated that both mouse and human thyroid cells could be isolated, expanded in vitro and cultured for a long time. Furthermore, these cells were able to self-renew and differentiate in vitro, suggesting that these cells had the proliferative capacity required for expansion. After transplanting a few cells, these organoids formed fully functioning hormone-producing thyroid follicles in mice with hypothyroidism. There are two important aspects of thyroid organoid culture that warrant further investigation with this technique. Firstly, organoids can be amplified while maintaining genetic stability , proving that they have certain safety as grafts. Secondly, thyroid organoids of patients who have undergone radical surgery for thyroid cancer can be generated from cryo-stored stem cells, which can come from bone marrow or adipose tissue .
The establishment of various organoid models provides new in vitro models for drug research, disease research, and organ replacement therapy, with great potential. However, research in otolaryngology–head and neck surgery is still at an early stage, and applications remain relatively limited. Future studies should consider expanding the scope of this technology, including providing personalized drug treatment for various tumors through tumor organoid technology. Furthermore, combining organoid technology with 3D printing links organoids more closely to the regeneration of various organ tissues. Further development of otolaryngology–head and neck organoid culture systems therefore holds great promise for basic research and translational medicine.
Conduction Disturbances and Outcome After Surgical Aortic Valve Replacement in Patients With Bicuspid and Tricuspid Aortic Stenosis
This is the first study to investigate the incidence and prognostic relevance of new-onset cardiac conduction disturbances after surgical aortic valve replacement separately in bicuspid and tricuspid aortic valve patients with aortic stenosis. Despite the younger age and lower prevalence of cardiovascular risk factors, bicuspid aortic valve aortic stenosis patients had a markedly elevated risk of permanent pacemaker implantation and new-onset left bundle-branch block compared with tricuspid aortic valve aortic stenosis patients, in particular bicuspid aortic valve aortic stenosis patients with fusion of the right- and non-coronary cusps. New-onset left bundle-branch block after surgical aortic valve replacement was associated with an increased all-cause mortality during follow-up.
The findings of this study underscore the potential benefit of preoperative cardiac multidetector computed tomography for determination of aortic valve morphology to optimize risk assessment and patient management. Because of the prognostic relevance of new-onset left bundle-branch block after surgical aortic valve replacement, close monitoring of these patients might be necessary to prevent serious complications during follow-up.
Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Study Design and Study Population
In this observational study, all patients with severe AS who underwent primary SAVR at Uppsala University Hospital (Uppsala, Sweden) between January 1, 2005, and December 31, 2022, were eligible for inclusion. The patients were identified from the institutional database, which contains prospectively collected data. Patients with a history of coronary artery disease, previous open-heart surgery, previous percutaneous atrial fibrillation or atrial flutter ablation, coexisting moderate or severe aortic regurgitation, coexisting moderate or severe mitral stenosis or regurgitation, and concomitant surgical procedures other than ascending aorta surgery were not eligible for inclusion in the study. All patients with a preoperative conduction disorder (permanent pacemaker, LBBB, or right bundle-branch block) were excluded, as were patients with a missing preoperative or postoperative ECG. Patients who died during the procedure were also excluded. The inclusion process is illustrated in Figure . The study was approved by the Regional Ethics Review Committee and complied with the Declaration of Helsinki. The need for informed consent was waived. The reporting of this study conforms to the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement.
Definition of Aortic Valve Morphology
Patients were stratified according to aortic valve morphology (ie, BAV or TAV). The aortic valve morphology was determined from the surgeon's visual description, which was documented in the medical records. Further categorization of the BAV morphology according to the Sievers and Schmidtke classification system was possible in 307 BAV-AS patients.
Definition of New-Onset Conduction Disturbances
Baseline ECGs were obtained during the preoperative assessment, which occurred ≤2 weeks before SAVR. All patients had ≥1 postoperative ECG, obtained on the third postoperative day per institutional routine. Additional ECGs could be obtained before or after the third postoperative day if deemed clinically indicated. New-onset conduction disturbances were defined as either a new-onset LBBB or a new-onset third-degree AV block during the index hospitalization associated with SAVR; new-onset conduction disturbances after discharge were not included. New-onset LBBB was defined according to recommendations. Third-degree AV block was defined as the absence of AV nodal conduction that did not resolve during the postoperative period, ultimately resulting in permanent pacemaker implantation during the index hospitalization. The decision of permanent pacemaker implantation was at the discretion of the responsible surgeon, after discussion with an electrophysiologist. Transient third-degree AV block events that did not develop into a permanent pacemaker need were not considered a new-onset conduction disorder.
End Points
The primary outcomes of interest were the incidence of new-onset third-degree AV block or new-onset LBBB in BAV-AS and TAV-AS patients after SAVR during the index hospitalization. We also investigated whether new-onset conduction disturbances were associated with all-cause mortality during follow-up. The start of follow-up was set to the end of the index SAVR, as we considered all new-onset conduction disturbances to be associated with the surgical trauma.
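For readers implementing a similar outcome definition programmatically, the sketch below illustrates how new-onset conduction disturbances might be flagged from paired pre- and postoperative ECG records. All column names (`pre_lbbb`, `post_lbbb`, `ppm_index_hosp`, etc.) are hypothetical; the study's actual database schema is not described in the text.

```python
import pandas as pd

# Hypothetical per-patient ECG/outcome table; column names are illustrative only.
ecg = pd.DataFrame({
    "patient_id":     [1, 2, 3, 4],
    "pre_lbbb":       [False, False, True,  False],  # LBBB on preoperative ECG
    "pre_rbbb":       [False, False, False, False],  # RBBB on preoperative ECG
    "pre_ppm":        [False, False, False, False],  # preexisting permanent pacemaker
    "post_lbbb":      [True,  False, True,  False],  # LBBB on in-hospital postoperative ECG
    "ppm_index_hosp": [False, False, False, True ],  # pacemaker implanted during index stay
})

# Exclude patients with any preoperative conduction disorder, mirroring the study design.
eligible = ecg[~(ecg.pre_lbbb | ecg.pre_rbbb | ecg.pre_ppm)].copy()

# New-onset LBBB: present on the postoperative ECG but absent preoperatively.
eligible["new_lbbb"] = eligible.post_lbbb & ~eligible.pre_lbbb

# Persisting third-degree AV block is captured here by permanent pacemaker
# implantation during the index hospitalization, as in the study's definition.
eligible["new_av_block"] = eligible.ppm_index_hosp

print(eligible[["patient_id", "new_lbbb", "new_av_block"]])
```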
Information on mortality was obtained from the institutional database, which is linked to the Swedish Population Register. Follow-up was administratively censored on December 31, 2023. In a subgroup analysis, we specifically aimed to assess the incidence of new-onset conduction disturbances according to bicuspid aortic valve morphology, stratified by the Sievers and Schmidtke classification.
Statistical Analysis
Statistical analyses were performed in R version 3.3.1. Two-tailed P values <0.05 were considered statistically significant. Continuous data are presented as mean±SD, whereas categorical data are reported as frequencies with percentages. For comparison of continuous baseline variables between BAV and TAV patients, the independent-samples t test was used. For comparison of categorical baseline variables between BAV and TAV patients, the χ2 test or the Fisher exact test was used as appropriate. Comparison of continuous variables between >2 groups was performed with 1-way ANOVA, followed by post hoc Bonferroni tests. The association between aortic valve morphology and postoperative conduction disorders was investigated with univariable and multivariable logistic regression models. Results are presented as crude odds ratios (ORs) and adjusted ORs (aORs) with 95% CIs. Adjustments were made for age, sex, preoperative left ventricular ejection fraction (LVEF), and size of the prosthetic aortic valve, based on a directed acyclic graph ( Figure S1 ). A multivariable Cox proportional hazards regression analysis was used to investigate whether new-onset third-degree AV block and new-onset LBBB were associated with all-cause mortality after SAVR. Adjustments were made for age, sex, preoperative LVEF, and aortic valve morphology (BAV or TAV), based on a directed acyclic graph ( Figure S2 ). Interaction was assessed through the introduction of a multiplicative interaction term. We also conducted a Cox proportional hazards regression analysis stratified by aortic valve morphology to account for the inherent heterogeneity between the BAV-AS and TAV-AS groups. The assumption of proportional hazards was tested with a 2-sided Schoenfeld residuals test, and there were no signs of violation of the proportional hazards assumption. The results from the Cox proportional hazards regression analysis are reported as crude hazard ratios (HRs) and adjusted hazard ratios (aHRs) with 95% CIs. The survival probability in the respective groups during follow-up after SAVR is graphically illustrated in Kaplan–Meier survival curves. In the subgroup analysis, univariable and multivariable logistic regression analyses were conducted to investigate the association between new-onset conduction disturbances and BAV subcategory. Only subgroups with at least 1 event were investigated. The TAV cohort (n=558) served as the reference group. Adjustments were made for age, sex, preoperative LVEF, and size of the prosthetic aortic valve. Results are presented as crude ORs and aORs with 95% CIs.
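As an illustration of the modeling workflow described above, the following sketch fits an adjusted logistic regression for a postoperative conduction disturbance and an adjusted Cox model for all-cause mortality. It uses Python (statsmodels and lifelines) rather than the R environment used in the study, and the data frame `df` with columns such as `bav`, `age`, `sex`, `lvef`, and `valve_size` is hypothetical, simulated only to mirror the structure of the analysis, not to reproduce its results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
# Hypothetical analysis dataset mirroring the study's variables.
df = pd.DataFrame({
    "bav":        rng.integers(0, 2, n),          # 1 = bicuspid, 0 = tricuspid
    "age":        rng.normal(68, 9, n),
    "sex":        rng.integers(0, 2, n),          # 1 = male
    "lvef":       rng.normal(55, 8, n),
    "valve_size": rng.choice([21, 23, 25, 27], n),
    "new_lbbb":   rng.integers(0, 2, n),          # outcome: new-onset LBBB
    "time":       rng.exponential(8, n),          # follow-up time (years)
    "death":      rng.integers(0, 2, n),          # all-cause mortality indicator
})

# Adjusted logistic regression: odds of new-onset LBBB by valve morphology.
X = sm.add_constant(df[["bav", "age", "sex", "lvef", "valve_size"]])
logit = sm.Logit(df["new_lbbb"], X).fit(disp=0)
print(np.exp(logit.params["bav"]))  # adjusted odds ratio for BAV

# Adjusted Cox proportional hazards model for all-cause mortality.
cph = CoxPHFitter()
cph.fit(df[["time", "death", "new_lbbb", "age", "sex", "lvef", "bav"]],
        duration_col="time", event_col="death")
print(cph.hazard_ratios_["new_lbbb"])  # adjusted hazard ratio for new-onset LBBB
```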
Main Analysis
Baseline Characteristics
We included 1147 patients in the study, of whom 589 (51.4%) had a BAV and 558 (48.6%) had a TAV. The median follow-up time was 8.2 years (interquartile range, 5.6–11.8 years), which was similar for BAV-AS and TAV-AS patients (8.3 years versus 8.1 years; P=0.292). The baseline characteristics of the whole cohort are summarized in Table . BAV patients were younger (64 years versus 72 years of age; P<0.001) and more often male compared with TAV patients (63% versus 53%; P<0.001). The BAV patients had fewer cardiovascular comorbidities in terms of hypertension (53% versus 70%; P<0.001), diabetes (10% versus 22%; P<0.001), hypercholesterolemia (31% versus 42%; P<0.001), and preoperative atrial fibrillation (9% versus 15%; P<0.001), as well as chronic pulmonary disease (11% versus 13%; P=0.048). The perioperative characteristics are presented in Table S1 .
New-Onset Third-Degree AV Block
The overall incidence of persisting third-degree AV block requiring permanent pacemaker implantation during the index hospitalization after SAVR was 4.5%, with a significant difference between BAV and TAV patients (6.5% versus 2.5%, P=0.001; crude OR, 2.68 [95% CI, 1.44–5.00]; P=0.002). The association between BAV and postoperative permanent pacemaker implantation remained significant after adjusting for age, sex, preoperative LVEF, and size of the prosthetic aortic valve (aOR, 2.42 [95% CI, 1.22–4.79]; P=0.011). On average, permanent pacemaker implantation occurred 5±2 days after SAVR, with no difference between BAV and TAV patients.
New-Onset LBBB
The overall incidence of new-onset LBBB after SAVR was 7.8%, with a significantly greater incidence in the BAV-AS group compared with the TAV-AS group (9.7% versus 5.7%, P=0.001; crude OR, 1.76 [95% CI, 1.12–2.76]). In the multivariable logistic regression analysis, BAV remained an independent risk factor for new-onset LBBB (aOR, 1.74 [95% CI, 1.06–2.86]; P=0.029).
Survival
A total of 379 patients (28.6%) died during follow-up. Neither new-onset third-degree AV block nor new-onset LBBB was associated with worse prognosis in the unadjusted Cox proportional hazards regression model (Figure A). In the adjusted Cox proportional hazards regression model, new-onset LBBB after SAVR was associated with increased all-cause mortality during follow-up (aHR, 1.60 [95% CI, 1.12–2.30]; P=0.011; Figure B), whereas new-onset third-degree AV block with subsequent permanent pacemaker implantation was not associated with worse prognosis during follow-up (aHR, 0.87 [95% CI, 0.46–1.64]; P=0.662; Figure B). New-onset LBBB after SAVR remained independently associated with all-cause mortality even after further adjustments for additional potential confounders ( Table S3 ). No significant interaction between new-onset LBBB and aortic valve morphology on all-cause mortality was observed ( P=0.753). When the association between new-onset conduction disturbances and all-cause mortality was explored in the stratified Cox proportional hazards regression analysis, the association between new-onset LBBB and all-cause mortality was borderline significant in the unadjusted model (HR, 1.65 [95% CI, 0.99–2.76]; P=0.056). In the adjusted model, new-onset LBBB (aHR, 1.69 [95% CI, 1.01–2.86]; P=0.048; Figure ), but not third-degree AV block with subsequent permanent pacemaker implantation (aHR, 0.86 [95% CI, 0.38–1.98]; P=0.728), was associated with increased mortality in the BAV-AS cohort ( Figure S3 ).
Survival for BAV-AS patients with new-onset LBBB was similar to that of TAV-AS patients without new-onset LBBB (Figure ).
Subgroup Analysis
Baseline Characteristics
It was possible to subcategorize the aortic valve morphology in 307 BAV-AS patients. Our distribution of BAV type 0 (8.5%), BAV with raphe between the left- and right-coronary cusps ([L/R-BAV] 69.1%), BAV with raphe between the right- and non-coronary cusps ([R/N-BAV] 19.5%), and BAV with raphe between the left- and non-coronary cusps ([L/N-BAV] 2.9%) was similar to that previously reported. Baseline characteristics of patients with the different BAV morphologies categorized according to the Sievers and Schmidtke classification, compared with TAV-AS patients, are presented in Table S2 .
New-Onset Third-Degree AV Block
The incidence of new-onset third-degree AV block stratified according to Sievers and Schmidtke aortic valve morphology in the subgroup of 307 BAV-AS patients is presented in Figure A. Implantation of a postoperative pacemaker was most common in the R/N-BAV group (18.3%), followed by type 0 (11.5%) and type 1 L/R-BAV (6.6%). None of the 9 patients with L/N-BAV developed new-onset third-degree AV block. In an unadjusted logistic regression analysis, all BAV subtypes were associated with an increased risk of new-onset third-degree AV block with subsequent permanent pacemaker implantation compared with TAV-AS patients (Table ). After adjusting for age, sex, LVEF, and prosthetic valve size, this association remained significant for type 0 BAV-AS patients (aOR, 4.82 [95% CI, 1.17–19.95]; P=0.030) and for BAV-AS patients with R/N-BAV fusion (aOR, 8.33 [95% CI, 3.31–20.97]; P<0.001). After further adjustments for all potential confounders entered in the preconstructed directed acyclic graph ( Figure S1 ), type 0 BAV and R/N-BAV remained independently associated with new-onset third-degree AV block (Table ; Table S4 ).
New-Onset LBBB
A new-onset LBBB developed in 20.0% of R/N-BAV, 15.4% of type 0 BAV, 5.7% of L/R-BAV, and 0% of L/N-BAV patients (Figure B). In the logistic regression analysis, only R/N-BAV was associated with new-onset LBBB after adjusting for age, sex, LVEF, and prosthetic aortic valve size (aOR, 4.03 [95% CI, 1.84–8.82]; P<0.001; Table ). Table and Table S4 illustrate that R/N-BAV remained independently associated with new-onset LBBB after further adjustment for all potential confounders entered in the preconstructed directed acyclic graph ( Figure S1 ).
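As a sanity check on the crude odds ratio reported for postoperative pacemaker implantation, the short sketch below recomputes it from the group sizes and event proportions given above (589 BAV and 558 TAV patients; 6.5% versus 2.5% event rates). The event counts are reconstructed by rounding, so agreement with the reported OR of 2.68 [95% CI, 1.44–5.00] is approximate.

```python
import math

n_bav, n_tav = 589, 558
events_bav = round(0.065 * n_bav)   # ≈ 38 pacemaker implantations in BAV patients
events_tav = round(0.025 * n_tav)   # ≈ 14 pacemaker implantations in TAV patients

# Crude odds ratio from the 2x2 table.
odds_bav = events_bav / (n_bav - events_bav)
odds_tav = events_tav / (n_tav - events_tav)
or_crude = odds_bav / odds_tav

# Approximate 95% CI via the log-OR standard error (Woolf method).
se = math.sqrt(1/events_bav + 1/(n_bav - events_bav)
               + 1/events_tav + 1/(n_tav - events_tav))
lo = math.exp(math.log(or_crude) - 1.96 * se)
hi = math.exp(math.log(or_crude) + 1.96 * se)
print(f"OR ≈ {or_crude:.2f} (95% CI {lo:.2f}–{hi:.2f})")  # ≈ 2.68 (1.44–5.00)
```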
This is the first study to investigate the frequency and clinical implications of new-onset conduction disturbances after SAVR in BAV-AS and TAV-AS patients separately. The major findings are: (1) permanent pacemaker implantation and new-onset LBBB after SAVR occurred more frequently in BAV-AS patients than in TAV-AS patients; (2) new-onset LBBB was associated with increased all-cause mortality, most likely driven by the worse outcome observed in BAV-AS patients; and (3) BAV-AS patients with fusion of the right- and non-coronary cusps had the greatest risk of developing clinically relevant conduction disorders after SAVR. We demonstrate that BAV-AS patients have a greater risk of developing new-onset LBBB after SAVR compared with TAV-AS patients. Although there was no interaction between new-onset LBBB and aortic valve morphology on all-cause mortality, our findings suggest that new-onset LBBB is a highly relevant clinical issue in BAV-AS patients in particular: they developed new-onset LBBB to a greater extent than TAV-AS patients, and new-onset LBBB was independently associated with worse outcomes at follow-up. Our stratified analysis suggests that BAV-AS patients who develop a new-onset LBBB after SAVR have a worse prognosis than BAV-AS patients who do not. The prognostic relevance of new-onset LBBB after aortic valve replacement (either surgical or transcatheter) in terms of mortality has remained unclear, as studies report conflicting results, mainly because of short follow-up times, which may obscure the long-term effects of an LBBB. In addition, previous studies have not investigated the prognostic impact of new-onset LBBB exclusively in BAV-AS and TAV-AS patients. In contrast to previous studies, we had adequate follow-up to evaluate the detrimental effects of a new-onset LBBB in the long term. Interestingly, survival in the new-onset LBBB group began to decline about 6 years after SAVR. As we investigated a SAVR cohort, and because new-onset LBBB was more common in BAV-AS patients, the mean age of our patients was considerably lower than that of patients in other studies, such as that of Nazif et al (64 years compared with 81 years of age, respectively). This is important considering the longer life expectancy of patients undergoing SAVR in general, and of BAV-AS patients in particular. Our observation that survival of BAV-AS patients with new-onset LBBB is similar to that of TAV-AS patients without new-onset LBBB is therefore intriguing. LBBB is associated with insufficient reverse left ventricular remodeling and failure to improve postprocedural LVEF, as it causes mechanical dyssynchrony with deteriorating diastolic and systolic left ventricular function, which can progress to congestive heart failure. Both heart failure and LBBB itself are associated with progressive conduction disturbances, resulting in an increased risk of life-threatening bradyarrhythmias or ventricular tachyarrhythmias and sudden cardiac death. Heart failure may also present with acute mechanical failure and resulting cardiogenic shock. This is important because BAV-AS patients already have worse left ventricular diastolic and systolic function before SAVR compared with TAV-AS patients. Although we did not investigate specific causes of death in the present study, we believe that the increased mortality observed in this group is mainly attributable to complications related to the LBBB.
The incidence of new-onset third-degree AV block with subsequent permanent pacemaker implantation was also higher for BAV-AS patients than for TAV-AS patients. It is known that right ventricular pacing may cause pacing-induced cardiomyopathy, which is associated with worse prognosis, and that having a cardiac implantable electronic device is associated with an increased risk of infectious complications. In contrast with the findings of other studies, there was no association between postoperative permanent pacemaker implantation after SAVR and mortality during follow-up in our study. This was probably attributable to the relatively few postoperative pacemaker events in this cohort. The risk of developing severe postoperative conduction disturbances is associated with surgical and direct mechanical insults to vital parts of the conduction system, as the AV node, bundle of His, and left bundle branch are located in close proximity to the commissure between the right- and non-coronary aortic valve cusps. Previous studies have established BAV as a risk factor for permanent pacemaker implantation after SAVR, but whether this depends on the morphology of the BAV had never been investigated. In our study, we found that BAV-AS patients with fusion of the right- and non-coronary cusps had a 4-fold increased risk of new-onset LBBB and an 8-fold increased risk of permanent pacemaker implantation after SAVR. Our findings suggest that the previously proposed risk increase for BAV patients in general is a result of the significantly increased risk in the R/N-BAV subtype in particular. This is most likely explained by the anatomic location of the raphe, which lies in close proximity to vital parts of the cardiac conduction system. The fibrotic and calcific burden is generally higher at the site of the raphe and the adjacent annular region. Theoretically, the extensive decalcification of the aortic valve, the raphe, and the aortic annulus during aortic valve replacement could inflict damage on the conduction system, resulting in severe postoperative conduction disturbances. Although surgical damage to the conduction system most likely explains the increased frequency of conduction disturbances in these patients, an underlying pathology of the conduction system could also contribute to the risk of new-onset conduction disorders. A previous study found that BAV patients without valvular disease have significantly longer His-ventricular intervals, as well as an increased risk of permanent pacemaker implantation during follow-up, compared with TAV patients, suggesting that BAV patients may also be more prone to conduction disturbances before SAVR. In our study, we excluded patients with preoperative RBBB, LBBB, or a permanent pacemaker, but the prevalence of preoperative conduction disorders did not differ between BAV-AS and TAV-AS patients. Additionally, the PQ interval was similar across the different aortic valve morphologies. This argues for mechanical damage to the cardiac conduction system during surgery rather than an underlying primary pathology of the conduction system in BAV-AS patients.
Clinical Implications and Future Directions
The findings of the present study, which is the first to investigate the correlation between aortic valve morphology and clinically relevant conduction disorders after SAVR, have some important clinical implications.
The risk of new-onset clinically relevant conduction disorders after SAVR is higher in BAV-AS patients than in TAV-AS patients, despite their younger age and overall lower prevalence of cardiovascular risk factors. Furthermore, new-onset LBBB was associated with worse prognosis in BAV-AS patients, which stresses the importance of close monitoring to prevent serious complications during follow-up. Notably, 38% of patients with the R/N-BAV morphology in our study developed either a need for a postoperative permanent pacemaker (18.3%) or a new-onset LBBB (20.0%). The knowledge obtained regarding the relation between BAV subtypes and the risk of postoperative conduction disorders provides vital information that could influence the management of these patients with regard to surgical technique, prosthetic valve size, and postoperative monitoring. Extra care should be taken during decalcification of the raphe and aortic annulus in R/N–BAV-AS patients. Our findings could also influence the individual assessment of when to intervene in patients with severe AS. Earlier intervention in R/N–BAV-AS patients could result in less extensive calcification of the aortic valve and aortic annulus, potentially decreasing the risk of postoperative permanent pacemaker implantation and new-onset LBBB. This is supported by our observation that severe preoperative heart failure symptoms (New York Heart Association functional class III or IV) were associated with an increased risk of postoperative permanent pacemaker implantation. Finding the etiology behind aortic valve degeneration in BAV-AS and TAV-AS could help identify therapies to halt or inhibit progressive aortic valve calcification and thereby limit AS-related complications, including postoperative permanent pacemaker implantation and new-onset LBBB. We suggest that preoperative cardiac multidetector computed tomography for determination of aortic valve morphology may optimize the management of AS patients referred for SAVR and provide useful knowledge when communicating the potential risks of the surgical intervention to patients. Future studies should focus on how to mitigate the risk of permanent pacemaker implantation and new-onset LBBB in R/N–BAV-AS patients. To further investigate the relationship between BAV morphology and postoperative permanent pacemaker need, future studies should also examine which patients remain pacemaker-dependent and which patients develop a future need.
Limitations
This was an observational, single-center study; therefore, the results might not generalize to other populations. However, the distribution of the different BAV subtypes was representative of what has been previously reported. Another limitation is the small sample size in some of the BAV-AS subgroups; including more patients would enable more confident conclusions. Our BAV-AS cohort is large and very well characterized, containing consecutive patients without any patient selection. Despite this, the BAV subtype was reported in only 52% of the BAV patients. The clinical relevance of the findings of the present study stresses the importance of standardized reporting of aortic valve morphology.
Conclusions
New-onset LBBB after SAVR was associated with worse prognosis during follow-up. Compared with TAV-AS patients, BAV-AS patients have a higher risk of developing severe conduction disorders after SAVR for severe AS, particularly those with fusion of the right- and non-coronary cusps.
Preoperative assessment of the aortic valve morphology may be conducted routinely to optimize management of patients referred for SAVR.
Acknowledgments
The authors express their sincere gratitude to their colleagues at the Department of Cardiothoracic Surgery and Anesthesiology, Uppsala University Hospital, Uppsala, Sweden.
Sources of Funding
This study was supported by Lennander Foundation; Erik, Karin and Gösta Selander Foundation; Royal Society of Arts and Scientists; Uppsala County Association Against Heart and Lung Diseases; Swedish Heart and Lung Association; Uppsala County Council; and Åke Senning's memory. The sponsors had no role in study design or writing of the manuscript.
Disclosures
Drs Grinnemo, Rodin, and Simonson are shareholders at AVulotion AB. The other authors report no conflicts.
Supplemental Material
Figures S1–S3
Tables S1–S4
Microfluidic Purification and Concentration of Malignant Pleural Effusions for Improved Molecular and Cytomorphological Diagnostics
Using qPCR, the cycle threshold (Ct) gives a relative measurement of the amount of genetic material of interest that is present; a lower Ct indicates a greater amount of the gene of interest. Although qPCR can be exquisitely sensitive for mutation detection given appropriate selection of amplification primers, there is often some non-specific amplification from background DNA. The presence of large quantities of background DNA can thus interfere with accurate measurement of the Ct due to this non-specific amplification; this effect may still be notable even after normalization with housekeeping genes. Several approaches are currently used to isolate cells of interest from pleural effusions for molecular analysis. The gold standard is laser capture microdissection (LCM), a technique used to isolate pure populations from cytology fluids, live cell culture, or heterogeneous tissue sections. However, this technique requires drying of the cells during capture, which can lead to cell damage, and it is not capable of extracting large quantities of cells for analysis. It is also very time- and labor-intensive. Flow cytometry and fluorescence-activated cell sorting (FACS) are also common methods for cell separation and sorting. While FACS can process samples of up to 30 mL in 1 hr, the sorted cells may not be suitable for further analysis as a result of the initial fixing and cell type-specific staining required for the sorting process. Microfluidic technology is an emerging tool that may deliver automated, well-controlled platforms to purify target cells with the highest possible sensitivity and specificity. Several strategies have been used to isolate and enrich tumor cells in body fluids, such as the use of self-assembled magnetic beads coated with anti-CD19 antibodies to capture B-cell malignancies. However, current technologies are limited by throughput and purity, and none has been placed in widespread use in clinical labs, for a variety of reasons. Many devices also focus on rare cell isolation from blood rather than tumor cell enrichment from pleural effusions, which have unique fluid properties and cellular profiles. Ideally, rapid sampling of pleural fluids (often liters of fluid) requires mL/min processing rates and separation using a label-free marker such as cell size. Moreover, sample preparation of pleural effusions should be performed in an automated, repeatable fashion to enable clinicians and cytopathologists to perform molecular assays on the purified cells with the highest possible sensitivity and specificity in a short time period (tens of minutes). We have previously demonstrated a potentially low-cost, miniaturized microfluidic system that recapitulates the high-throughput operations of enrichment and concentration of a standard laboratory centrifuge (the "Centrifuge Chip"). Here, we use the Centrifuge Chip to isolate cancer cells and mesothelial cells at high purity from pleural effusions as a preparation step for downstream analysis by traditional cytology and mutational analysis. By processing a large volume of fluid and selectively enriching larger cells over a background of red and white blood cells, we replace the traditional centrifugation step in the clinical lab while also potentially enabling more sensitive analysis of purer preparations originating from large-volume samples.
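To make the Ct arithmetic concrete, the sketch below applies the standard 2^(-ΔΔCt) relative quantification formula, a common way to interpret the Ct shifts discussed above. The specific Ct values are invented for illustration and do not come from this study.

```python
# Relative quantification by the standard 2^(-ΔΔCt) method.
# All Ct values below are illustrative, not measured data from this study.

def fold_change(ct_target_sample: float, ct_ref_sample: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of a target gene in a sample vs. a control,
    each normalized to a housekeeping (reference) gene."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# A purer tumor-cell preparation lowers the mutant-allele Ct relative to the
# housekeeping gene, because less wild-type background DNA competes in the assay.
print(fold_change(26.0, 20.0, 29.0, 20.0))  # 2^3 = 8-fold more mutant signal
```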
Briefly, the approach employs unique inertial fluid physics to selectively collect larger cells (such as tumor cells) in laminar fluid microvortices at high rates without clog-prone filters. Smaller leukocytes and erythrocytes are not stably trapped in vortices and are significantly reduced in the collected concentrated sample. We have also implemented fluid plumbing automation to process samples and release isolated cells back into a small volume, under the control of a custom-written software program. Each Centrifuge Chip processes effusions at a flow rate of 6 mL/min and concentrates larger cells (mesothelial and epithelial). Purified cells are released and made readily available in a collection vial or microtiter plate for cytology and identification of gene mutations.
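As a back-of-the-envelope check on the throughput figures quoted above, the sketch below (Python; the constant and function names are ours, not from the original system software) computes the processing time for a typical specimen and the volumetric concentration factor from input sample to released cell suspension.

```python
# Throughput arithmetic using the figures quoted above: one chip processing at
# 6 mL/min, ~50 mL specimens, and cells released into a ~250 uL volume.

FLOW_RATE_ML_PER_MIN = 6.0   # per-chip processing rate
SAMPLE_VOLUME_ML = 50.0      # typical specimen volume processed
RELEASE_VOLUME_ML = 0.250    # concentrated release volume

def processing_time_min(volume_ml: float, flow_ml_per_min: float) -> float:
    """Minutes needed to push a given sample volume through the chip."""
    return volume_ml / flow_ml_per_min

def volume_concentration_factor(input_ml: float, output_ml: float) -> float:
    """Fold reduction in volume from input specimen to released suspension."""
    return input_ml / output_ml

print(processing_time_min(SAMPLE_VOLUME_ML, FLOW_RATE_ML_PER_MIN))      # ~8.3 min, consistent with <10 min
print(volume_concentration_factor(SAMPLE_VOLUME_ML, RELEASE_VOLUME_ML)) # 200-fold volume reduction
```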
Ethics Statement The study is exempt from institutional review board approval because remnants of patient specimens were processed and analyzed anonymously, with no access to protected health information, personal identifying information, or sensitive information. The study is exempt under 45 CFR 46.101 Category 4, which includes “Research involving the collection or study of existing data, documents, records, pathological specimens, or diagnostic specimens, if these sources are publicly available or if the information is recorded by the investigator in such a manner that subjects cannot be identified, directly or through identifiers linked to the subjects.” (http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.html#46.101) Microfluidic Device Fabrication and Setup Devices were fabricated using standard photolithography and polydimethylsiloxane replica molding techniques. The devices were designed in AutoCAD (Autodesk) and printed on a transparency photomask at 20,000 dots per inch (CAD/Art Service, Inc.). The mold was photolithographically defined using this mask in the UCLA Nanoelectronics Research Facility. Negative photoresist, KMPR 1050 (MicroChem), was spun at 2400 rpm for 30 s on a 10-cm silicon wafer. The wafer was soft-baked at 100°C for 15 min, exposed under near UV for 30 s, post-baked at 100°C for 4 min, and developed in SU-8 Developer (MicroChem). The height of the resulting feature was measured to be 55 µm using a profilometer (Veeco Metrology). Polydimethylsiloxane (PDMS) (Sylgard 184, Dow Corning Corp.) was poured onto the photoresist master at a 10∶1 ratio of base to crosslinker, degassed in a vacuum chamber, and cured at 65°C overnight. The devices were then cut from the mold, ports were punched with a punch kit (Technical Innovations), and the devices were bonded to glass slides using oxygen plasma for 30 s (Harrick Plasma). After plasma treatment and placement onto the glass substrate, the devices were maintained at 65°C in an oven for 15 min to increase bonding. Cell Trapping Mechanism The mechanism of operation is based on size-dependent inertial lift, which leads to selective entry and stable orbits for larger cells within vortices created in an expansion reservoir. Smaller cells do not experience sufficient lift force and therefore either do not enter the vortex, or do not have enough restoring lift force to remain stable within the vortices in the presence of destabilizing disturbances from other orbiting particles. In our previous work we identified reservoir geometries and flow conditions to selectively collect cells and particles above ∼15 µm, with capture efficiencies of ∼20% for MCF7 cells spiked in diluted blood. The rectangular reservoirs are 480 µm wide and 720 µm long, and the straight channels are 40 µm wide. In this work, we made several device modifications, including 1) the integration with a custom-made pressure system that operates using a simple ‘plug-and-play’ option in which an operator does not need to be present at all times, 2) the shortening of the device channel length to reduce fluidic resistance, and 3) the increase of the number of parallel channels to 16, with 4 chambers in each channel, for a total of 128 cell trapping reservoirs to process samples at a flow rate of 6 mL/min. At this flow rate one patient sample (∼50 mL of volume) takes <10 minutes to process.
The capture efficiency of the device was ∼47%, defined as the number of 20 µm diameter beads caught and released from the vortices divided by the total number of beads injected. Sample Processing using a Computer-Controlled Pressure System The device is connected to a custom-made pressure system that delivers effusion samples or saline wash from pressurized glass bottles through the Centrifuge Chip. The LabVIEW-controlled system contains a pair of air regulators, air valves, and liquid valves (SMC Corporation) that bring compressed air into the bottles and drive fluid through the microchip device. Non-diluted pleural effusion samples are poured directly into the glass bottle and introduced through the device at 6 mL/min. Once the vortex traps are filled with cells, PBS is introduced into the device to wash out untrapped blood cells in the main flow and the vortex traps. Cells trapped in the fluid vortex are released by reducing the input air pressure and subsequently lowering the flow rate and dissipating the vortex. We implement a ‘trap-and-release’ program that can continuously introduce sample through the Centrifuge Chip, wash, and release the captured cells in a small 250 µL volume into a microtiter plate or collection vial. A video of sample processing was recorded using Phantom Camera Control software (Vision Research Inc.) with a high-speed camera (Phantom v7.3). Sample Collection and Preparation Remnants of 115 pleural effusion samples obtained from Ronald Reagan UCLA Medical Center, Santa Monica UCLA Medical Center, and Northridge Hospital Medical Center were used in our study. From all specimens, up to 50 mL of sample were processed with the Centrifuge Chip. Effusions were passed through a 40 µm cell strainer before being introduced into the Centrifuge Chip system. Half of each processed sample was returned to the cytology laboratory to create cell smears. This was performed in parallel with cell smears produced with traditional cytological methods on original, unprocessed samples. The other half of each processed sample was fluorescently labeled to quantify sample purity. A fraction of samples were profiled for cell size distributions before and after processing. Cell Smear Preparation and Imaging Smears were prepared according to the standard methods used to prepare samples for clinical evaluation. Briefly, samples were aliquoted into 50 mL conical tubes and centrifuged with a standard benchtop centrifuge. After centrifugation, the supernatant is aspirated and the cells are resuspended in a buffer solution and placed with a glass slide into a cytocentrifuge (Thermo Scientific) to create a cell smear. The cell slides are air-dried or fixed and stained with Papanicolaou (Pap) or May-Grünwald-Giemsa (MGG) stains. Fluorescent Staining for Purity Measurements For each specimen, 300 µL of the original effusion was transferred into one well of a 96-well microtiter plate. To compare the processed sample versus the original sample, up to 10 mL of effusion volume was processed with the Centrifuge Chip and isolated cells were released in a volume of ∼250 µL in the microtiter plate. To determine the composition of the cell population, leukocyte, epithelial, and nuclear stains were used. After centrifuging the cells to the bottom of the well with a plate centrifuge (Beckman Coulter), the supernatant was aspirated.
Cells were treated with 4% v/v formaldehyde for 15 min, permeabilized with 0.4% v/v Triton X-100 (Sigma-Aldrich) for 7 min, and incubated with cytokeratin (CK)-PE (epithelial and mesothelial cells), CD45-FITC (white blood cells), and DAPI (nucleus) (Invitrogen) in 2% w/v BSA. Between each step, cells were sedimented with the centrifuge and washed with PBS. After staining, the cells were imaged using a CCD camera (Photometrics CoolSNAP HQ2) mounted on a Nikon Eclipse Ti microscope. The whole well was automatically imaged in a few minutes (100X) using an ASI motorized stage operated with Nikon NIS-Elements AR 3.2 software. Captured images were automatically obtained for four configurations: brightfield, FITC, TRITC, and DAPI filter sets. Collected images were automatically stitched together using the NIS-Elements software. Images were analyzed by enumerating the number of CK+ and CD45+ cells present in each well. Purity is defined as the number of CK+ cells divided by the total number of nucleated cells. CK+ cells include carcinoma cells and mesothelial cells. We did not attempt to separate tumor cells from mesothelial cells, as these cells share a similar size, but these separations can be carried out using IHC markers such as calretinin, if necessary, to further enrich a specimen. Quantification of Cell Size Dilute volumes of unprocessed and processed pleural samples were lysed with red blood cell lysis buffer (Roche) and incubated with Calcein AM (Invitrogen) for 15 minutes. Cells were imaged using a Nikon Eclipse Ti fluorescent microscope, and cell sizes were automatically measured using Nikon NIS-Elements AR 3.2 software. Detection of KRAS Gene Mutations in Spiked Pleural Effusion Samples We evaluated the performance of the Centrifuge Chip in improving the accuracy of mutational analysis by extracting molecular information from spiked pleural effusions before and after enrichment, to determine the potential improvement to qPCR measurement provided by high-purity capture. The initial cell concentration was quantified from a small pleural sample aliquot using a hemacytometer after performing a red blood cell lysis step. A549 lung cancer cells (ATCC) were spiked at 0.1% purity into 50 mL of pleural effusions diagnosed as negative for malignancy. Spiked samples were evaluated for a known activating mutation in KRAS found in A549 cells, which can provide resistance to targeted therapies. Specifically, we looked at the 34 G>A substitution in KRAS (KRAS*) as identified by the Sanger COSMIC database. We utilized quantitative RT-PCR to identify mutant KRAS in the A549 cells versus wild type (in HeLa and other cells) using a modification of the system described by Morlan et al. (2009). We used a primer complementary to the mutant and a blocking primer with a nonhydrolyzable phosphate group complementary to the wild-type sequence, which was present at four times the concentration. The rationale for this strategy was that the non-hydrolyzable primer would block non-specific amplification from the wild-type sequence while still allowing amplification from the mutant of interest. GAPDH mRNA was also amplified as an indicator of the relative number of cells in a given sample and used to normalize each measurement to determine the ΔCt value. Briefly, reverse transcription was performed using a SuperScript III RT kit (Invitrogen) according to the manufacturer's instructions to create cDNA libraries.
TaqMan PCR was performed using 2 µL of the RT product in a 20 µL total volume with 1× TaqMan Universal PCR Master Mix (no UNG) (Roche), with the primers at 900 nM, TaqMan probe at 200 nM, and blocker at 3600 nM. Stock TaqMan probes for GAPDH and KRAS were obtained from Applied Biosystems and used without modification. The thermocycling conditions were as follows: 10 minutes at 95°C, then 40 cycles of 20 seconds at 95°C and 1 minute at 60°C.
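To make the GAPDH normalization described above concrete, here is a minimal sketch (Python; the function names are ours, and the Ct values are illustrative rather than measurements from this study) of how a KRAS* ΔCt is derived and why it separates cases that share the same raw Ct.

```python
# Sketch of the normalization described above: KRAS* dCt = Ct(KRAS*) - Ct(GAPDH).
# A lower dCt indicates more mutant signal per unit of input material; under an
# idealized assumption of 100% amplification efficiency, relative abundance
# scales as 2**(-dCt). The values below are illustrative only.

def delta_ct(ct_target: float, ct_reference: float) -> float:
    """Normalize a target-gene Ct against a housekeeping-gene Ct."""
    return ct_target - ct_reference

def relative_abundance(dct: float) -> float:
    """Idealized relative target abundance implied by a dCt value."""
    return 2 ** (-dct)

# Two samples with identical raw KRAS* Ct but very different cellularity:
sparse_mutant = delta_ct(ct_target=26.0, ct_reference=18.0)     # dCt = 8.0 (few cells, real signal)
dense_background = delta_ct(ct_target=26.0, ct_reference=12.0)  # dCt = 14.0 (many cells, nonspecific)
print(sparse_mutant, relative_abundance(sparse_mutant))
print(dense_background, relative_abundance(dense_background))
```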
Measurement of Cell Size Distributions in Pleural Effusions Our system operates by selectively isolating cells above a particular size threshold. Therefore, we first measured detailed information on the number and diameter of cells present in 25 pleural fluid samples. Possible cytological diagnoses for pleural effusions included: positive for malignancy, suspicious or equivocal (atypical) for malignancy, and negative for malignancy. Patient samples diagnosed as negative for malignancy were often concurrently diagnosed with acute inflammation, chronic inflammation, or lymphocytosis. Cytologically, samples with acute inflammation were associated with an increased neutrophil population; those with chronic inflammation were associated with a larger fraction of lymphocytes and histiocytes, while those with lymphocytosis were associated with increased lymphocytes. In cases positive for malignancy, the tissue of origin was often known from patient history. We observed a population of cells greater than 15 µm in malignant samples, consistent with the observation that malignant and mesothelial cells are usually larger compared to other cells present within these fluids. Cases of inflammation had a large population of 10–15 µm cells, potentially representing the characteristic population of large activated immune cells. Of the samples diagnosed as Positive for Malignancy, on average 36.57% of nucleated cells were larger than 15 µm. A lower percentage of larger cells was present in samples diagnosed as Negative and Negative with Inflammation (32.47% and 26.92% of nucleated cells larger than 15 µm, respectively). Cases with inflammation are known to have a larger number of white blood cells as a fraction of the population, thus leading to a lower relative percentage of larger cells than negative samples alone. Note that these relatively large percentages of larger cells in non-malignant samples are likely the result of the presence of mesothelial cells and large activated leukocytes. Still, malignant samples contain the largest fraction of large cells, such that cell size is a potential biomarker for harvesting malignant cells from pleural fluid samples. The Centrifuge Chip enriches for the cell populations greater than 15 µm. The Centrifuge Chip Increases Purity of Clinical Samples Qualitatively, the Centrifuge Chip delivered a higher-purity sample compared to unprocessed or centrifuged specimens. The device increased purity in all 66 cases examined (100%). Paired t-tests between unprocessed and processed samples demonstrated a significant increase in purity, with p values less than 0.05 for all diagnoses. In agreement with our cell size measurements, we observed many cells captured for malignancy-positive cases and fewer for malignancy-negative cases with lymphocytosis, reactive changes, or acute inflammation. Additionally, the purity increased from unprocessed to Centrifuge Chip-processed specimens. Purity fold is defined as the purity of a chip-concentrated sample over that of the initial unprocessed sample. A greater than 65-fold increase was observed for samples diagnosed as positive for malignancy. Interestingly, samples with chronic inflammation had a 132-fold increase as a result of the larger leukocyte populations in the initial samples, which had <1% purity. Higher purities can be achieved by increasing the critical size cutoff of the Centrifuge Chip to further reduce leukocyte capture, although at the cost of potentially not trapping smaller malignant cells of interest.
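The purity bookkeeping used in this section is simple enough to sketch directly. In the Python below (function names are ours; the counts are invented for illustration, loosely patterned on the chronic-inflammation case above), purity is the fraction of CK+ cells among all nucleated cells, and purity fold is the ratio of processed to unprocessed purity.

```python
# Purity as defined in this study: CK+ cells / all nucleated (DAPI+) cells.
# Purity fold: purity of the chip-concentrated sample over the unprocessed sample.

def purity(ck_positive: int, total_nucleated: int) -> float:
    if total_nucleated == 0:
        raise ValueError("no nucleated cells counted")
    return ck_positive / total_nucleated

def purity_fold(processed: float, unprocessed: float) -> float:
    if unprocessed == 0:
        raise ValueError("unprocessed purity is zero")
    return processed / unprocessed

# Illustrative counts: a sample starting below 1% purity, strongly enriched
# after processing (comparable in spirit to the ~132-fold case quoted above).
before = purity(ck_positive=40, total_nucleated=10_000)   # 0.4%
after = purity(ck_positive=530, total_nucleated=1_000)    # 53%
print(f"purity fold: {purity_fold(after, before):.0f}x")  # ~132x
```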
Reduced Nonspecific Background from Cytology Slides and Reduced Sample Area We addressed the issues of reducing background cell populations and limiting the area of microscopic evaluation by using the Centrifuge Chip to create concentrated, low-background cell smears. In all samples, malignant and mesothelial cells are found amongst a cellular background of red and white blood cells in standard cytology slides, while there are few background cells observed in the Centrifuge Chip-prepared sample slides. As expected from our cell measurements above, we concentrated mesothelial and malignant cells in samples diagnosed as positive for malignancy. Malignant cells are characterized by large nuclei and a high nuclear-cytoplasmic ratio. Malignant cells are often seen as cell aggregates or clumps in effusions, and these cell populations were also collected with the Centrifuge Chip. The Centrifuge Chip may aid pathologists in rapid visualization of rarer malignant cells, which may improve diagnostic sensitivity, especially by enabling processing of larger volumes of fluid into a minimal final concentrated sample volume. Effect of Purity on KRAS Gene Mutation Detection Non-specific amplification from background cells can reduce confidence when measuring the presence of mutations. Pure populations of 10⁵ A549 cells, which contain the KRAS* mutation, and HeLa cells, which have wild-type KRAS, were measured to have threshold cycles (Ct) of 24.10 and 32.53, respectively (the latter value indicating nonspecific amplification of wild-type KRAS from HeLa cells). In mixed samples of A549 and HeLa cells, we found that the presence of the specific KRAS mutation could be distinguished from background at as low as 0.1% purity of A549 cells, with as few as 10 A549 cells present. Note that the same Ct values can be observed from low numbers of A549 cells with specific amplification occurring, or from large numbers of HeLa cells with non-specific amplification (see 10,000 HeLa cells vs. 10 A549 cells). Therefore, we normalized the data to account for cell number by subtracting the GAPDH Ct from the KRAS Ct values, yielding a KRAS* ΔCt. As expected, increased-purity samples yielded improved results, characterized by a lower ΔCt. The Centrifuge Chip also improved the sensitivity and specificity in detecting A549 cells spiked into negative clinical effusion samples at 0.1% purity. Unspiked negative samples (including acute and chronic inflammation samples) averaged a KRAS* ΔCt of 15.7±1.76 (N = 7), and spiked samples at 0.1% purity averaged 12.8±1.39. Once processed with the Centrifuge Chip, the KRAS* ΔCt decreased and became further differentiated from the negative samples in all cases, with an average ΔCt of 9.6±1.19. Average GAPDH Ct values were 17.63±2.10, 17.80±2.05, and 23.75±2.03 for negative samples, unprocessed spiked samples, and processed spiked samples, respectively. A paired t-test between non-spiked and spiked samples demonstrated improved statistical significance in the difference in the average KRAS* ΔCt after spiked samples were processed with the Centrifuge Chip (p = 0.0027 before and p = 1.44e−6 after processing). Moreover, using a Gaussian distribution fit to the ΔCt values for each group of samples, receiver operating characteristic curves demonstrated improved area under the curve (AUC) values, from 0.905 (unprocessed spiked samples) to 0.998 (processed spiked samples).
The upper cutoff threshold for a positive KRAS* ΔCt diagnosis was determined by maximizing both sensitivity and specificity, and was found to be 14.1 for unprocessed and 12.2 for processed samples. By increasing sample purity with the device, we are able to improve KRAS mutation detection and diagnostic confidence. For a highly specific assay like PCR, our simple concentration approach has the potential to improve diagnostic accuracy for mutation detection when the original mutation is known (e.g., in a sequenced tumor). We expect our technique will be particularly useful in less specific assays, such as gene sequencing, when a particular gene mutation is suspected but its source is unknown. As next-generation sequencing technologies improve, it may even be possible to perform whole-transcriptome sequencing, and achieving high purity using an approach such as this would be essential to detect mutations of interest while suppressing non-specific wild-type reads.
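The ROC analysis above can be reproduced schematically. If each group's ΔCt values are modeled as Gaussian (as the text indicates), the AUC of "call positive when ΔCt falls below a threshold" has a standard closed form; this is a textbook identity, not a procedure the authors explicitly state. The sketch below (Python standard library only; function names are ours) plugs in the summary statistics reported in this section and recovers AUC values close to those reported.

```python
# AUC for two Gaussian score distributions, with "lower dCt = positive":
# AUC = Phi((mu_neg - mu_pos) / sqrt(sd_pos**2 + sd_neg**2)), Phi = standard
# normal CDF. Summary statistics below are the dCt values reported above.
from math import sqrt
from statistics import NormalDist

def gaussian_auc(mu_pos: float, sd_pos: float, mu_neg: float, sd_neg: float) -> float:
    """Closed-form ROC AUC when both groups' scores are Gaussian."""
    return NormalDist().cdf((mu_neg - mu_pos) / sqrt(sd_pos**2 + sd_neg**2))

negatives = (15.7, 1.76)    # unspiked clinical samples
spiked_raw = (12.8, 1.39)   # spiked, unprocessed
spiked_chip = (9.6, 1.19)   # spiked, Centrifuge Chip processed

print(gaussian_auc(*spiked_raw, *negatives))   # ~0.90, close to the reported 0.905
print(gaussian_auc(*spiked_chip, *negatives))  # ~0.998, matching the reported value
```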
We have developed the Centrifuge Chip, a microfluidic device which can rapidly isolate larger, potentially malignant cells from pleural effusions in a label-free manner with high purity, using size as a biomarker. This chip has several advantages over currently available techniques, including speed, robust operation, and the ability to process large volumes of sample and concentrate cells into a small end volume. We were able to prepare effusion specimens in ten minutes, an order of magnitude faster than other similar techniques, with increased purity. We also demonstrated that processing by the chip provides improved accuracy in detecting mutations with qPCR. This system allows for rapid purification and isolation of cells of interest and has the potential to give cytopathologists, clinicians, and researchers access to purified cells for preparing cytology slides, detecting specific gene mutations for targeted drug therapies, culturing cells for further analysis, or even isolating single cells for next-generation sequencing analysis, at lower cost than currently available techniques. Improved mutation detection at lower cost from readily available body fluids provides a compelling route towards making targeted anti-cancer therapies a broad clinical reality.
Figure S1 Centrifuge Chip System Schematic and Operations. Sample processing is controlled using an automated pressure system comprised of an air tank, pressure regulators, air and liquid valves, and a computer with a LabVIEW (National Instruments) user interface. A liquid valve upstream from the device switches between the saline wash and pleural sample bottles, and the downstream valve directs fluid between the waste and collection containers. The procedure involves three key steps: i) processing the fluid sample to capture potential cancer cells, ii) washing the device reservoirs to remove smaller leukocytes and RBCs while maintaining the same flow rate and active microvortices to keep larger cells trapped, and iii) lowering the flow rate to release the captured cells from the vortices and into a 96-well plate. (TIF) Figure S2 Sample Processing Flow with the Centrifuge Chip. 50 mL of pleural effusion sample were processed using traditional cytological methods and the Centrifuge Chip. A portion of the cells harvested from the Centrifuge Chip was returned to the cytopathology laboratory to create cell smears; the other portion of the processed sample was immunolabeled for purity analysis. (TIF) Figure S3 Effect of cell number and purity on PCR. (A) Quantitative RT-PCR was performed on cell lines with varying cell number. Ct values for KRAS* (solid line) and GAPDH (dotted line) decreased with increasing cell number. KRAS* Ct for samples with 1,000 HeLa cells or fewer was not detected. (B) KRAS* ΔCt decreases with increasing purity of A549 cells spiked into a larger population of HeLa cells. (TIF) Video S1 Microfluidic processing of a pleural effusion. A real-time video is shown for Centrifuge Chip processing of a bloody patient pleural effusion specimen. The microfluidic chip is first primed with an isotonic solution, and the automated pressure setup and computerized system control i) patient sample infusion, ii) solution exchange, and iii) cell release. When the sample is initially infused, the flow rate increases until vortices develop, which is apparent when cells begin to occupy two lateral vortices beside the central flow stream. Next, an upstream valve switches the injection fluid from the specimen bottle to the isotonic wash bottle. During the solution exchange, the remaining small blood cells are observed to wash away, leaving only the stably trapped large cells. Finally, cells are collected off-chip by lowering the wash flow rate to dissipate the vortices and release the cells. The process is repeated as necessary to collect more cells or to process the entire patient sample. Sample infusion time is adjusted for each specimen so that the microvortices do not become oversaturated with cells prior to release; infusion times used in this study ranged from 10 seconds to 3 minutes, depending on the sample cellularity. (WMV) Table S1 Complete Summary of 115 Patient Pleural Fluids Used in the Study. Pos = positive for malignancy, Sus = suspicious for malignancy, N = negative for malignancy, R = reactive changes, L = lymphocytosis, CI = chronic inflammation, and AI = acute inflammation. Purity is defined as the number of CK+/DAPI+ cells over the total number of cells. (DOC)
Brightfield Multiplex Immunohistochemistry Assay for PD-L1 Evaluation in Challenging Melanoma Samples | 9bf5690b-52e3-4518-8f39-d8bbf03d5b40 | 11371108 | Anatomy[mh] | Tissue Samples Formalin-fixed, paraffin-embedded (FFPE) melanoma brain metastasis (MBM) tissue samples (n=10) were retrospectively selected. Tissue sections, 3 µm in thickness, were obtained from paraffin blocks retrospectively selected from the Archive of the Section of Pathology, Department of Health Sciences, University of Florence, Florence, Italy. A lymph node melanoma metastasis sample was used as an internal control to validate the RED chromogen singleplex staining protocol. The clinicopathologic features and treatments of the case cohort are detailed in Table . Ethics Approval The use of formalin-fixed, paraffin-embedded (FFPE) samples of human tissue was approved by the Local Ethics Committee “Comitato Etico Regione Toscana-Area Vasta Centro (CEAVC)” (13676_bio; 22156_bio). This study was performed in accordance with the Declaration of Helsinki. Immunohistochemistry Immunohistochemistry was performed on the Ventana BenchMark ULTRA automated stainer. Sections of 3 µm were deparaffinized in EZ prep (#950-102; Ventana), and antigen retrieval was achieved by incubation with cell-conditioning solution 1 (#950-124; Ventana), a Tris-ethylenediaminetetraacetic acid-based buffer (pH 8.2), for both singleplex and multiplex IHC. Singleplex IHC Sections were incubated with the following primary antibodies: anti-CD4 (#790-4423, rabbit monoclonal, clone SP35, ready to use; Ventana Medical System, Tucson, AZ), anti-CD8 (#790-4460, rabbit monoclonal, clone SP57, ready to use; Ventana Medical System), anti-FoxP3 (#14-477-82, mouse monoclonal, clone 236A/E7; Invitrogen, USA), anti-CD68 (#PDM065, mouse monoclonal, ready to use, clone PGM1; Diagnostics BioSystem, USA), anti-CD163 (#760-4437, clone MRQ-26, mouse monoclonal, ready to use; Ventana Medical System), and anti-PD-L1 (#741-4905, rabbit monoclonal, clone SP263, ready to use; Ventana Medical Systems). The signal was developed with the UltraView Universal RED detection kit (#760-501; Ventana Medical Systems), and sections were counterstained with hematoxylin (#760-2021, ready to use; Ventana Medical Systems). PD-L1/SOX10 Multiplex IHC Sections were incubated with the following primary antibodies: anti-SOX10 (#760-4968, rabbit monoclonal, clone SP267, ready to use; Ventana Medical Systems) and anti-PD-L1 (#741-4905, rabbit monoclonal, clone SP263, ready to use; Ventana Medical Systems). For double labeling, each denaturation step was done by treating the slides with Reaction buffer (#950-300; Ventana Medical Systems) for 8 minutes at 95°C. For SOX10 chromogenic detection, the OptiView DAB IHC Detection Kit (#760-700; Ventana Medical Systems) was used. For PD-L1 chromogenic detection, the UltraView Universal RED detection kit (#760-501; Ventana Medical Systems, Tucson) was used. Finally, sections were counterstained with hematoxylin (#760-2021, ready to use; Ventana Medical Systems).
Comparison Between DAB and RED Chromogens We first validated singleplex staining with RED chromogen for PD-L1 on the Ventana BenchMark ULTRA platform. To investigate RED chromogen performance, IHC staining for PD-L1 was performed with the 2 different chromogens available in routine diagnostics, RED and DAB, the latter currently considered the gold standard for singleplex staining, as shown in Figure . DAB remains one of the most used chromogens for IHC, maintaining a high level of contrast and clearness (Fig. B). Indeed, it exhibits many desirable features, including that DAB precipitates are virtually insoluble in aqueous and organic solvents. In contrast, as shown in Figure C, RED chromogen guarantees good contrast and brightness, resulting in IHC stains whose visualization is comparable with those employing DAB but that provide better contrast against melanin pigmentation, significantly reducing, in our assessment, the risk of improper interpretation in pigmented lesions. Validation of Double-Labeling PD-L1/SOX10 Protocol Starting from the validated singleplex staining protocols routinely used, we first assessed whether these singleplex IHC methods could be combined; in particular, we combined the SOX10 singleplex protocol with DAB chromogen and the PD-L1 singleplex protocol with RED chromogen (Fig. ). Since these antigens are present in different cell compartments, that is, the cell membrane for PD-L1 and the nucleus for SOX10, it is recommended to start the multiplex sequence with the antibody directed against the nuclear antigen, then proceed with the one against the membrane. In our experience, this sequencing allows for better development of both signals, without one masking the other. As shown in Figure C, multiplex IHC provides many advantages over singleplex IHC (Fig. B). Double labeling provides the tools necessary to clearly identify PD-L1+ melanoma cells (Fig. C). The simultaneous visualization of 2 different molecular targets allows the topographical relationship between the 2 labels to be evaluated within the context of the tissue morphology. Furthermore, in melanoma samples in which the inflammatory infiltrate is strongly present, as shown in Figure , especially in nodal involvement and distant metastases (panels A to F), this new double-labeling protocol allowed PD-L1+ melanoma cells to be clearly distinguished from PD-L1+ immune cells, as can be seen in Figures G and H, which illustrate how strong PD-L1 membranous expression in histiocytes and lymphocytes might lead to an overestimate of the TPS score in this metastatic melanoma sample.
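To illustrate numerically why gating PD-L1 positivity on a tumor-lineage marker matters for scoring, consider the sketch below (Python; all counts are invented for illustration, and TPS is used in its common definition of PD-L1+ viable tumor cells over total viable tumor cells, times 100, which the source does not spell out explicitly).

```python
# TPS = 100 * (PD-L1+ tumor cells) / (total viable tumor cells). Without a
# tumor-lineage co-stain such as SOX10, PD-L1+ immune cells can be miscounted
# as tumor cells, inflating the numerator. Counts below are invented.

def tps(pdl1_pos_tumor: int, total_tumor: int) -> float:
    return 100.0 * pdl1_pos_tumor / total_tumor

total_tumor_cells = 1000      # SOX10+ cells in the scored area
pdl1_pos_tumor_cells = 5      # SOX10+ / PD-L1+ (true double positives)
pdl1_pos_immune_cells = 60    # SOX10- / PD-L1+ histiocytes and lymphocytes

# With SOX10 gating, only double-positive cells enter the numerator:
print(tps(pdl1_pos_tumor_cells, total_tumor_cells))  # 0.5 -> TPS < 1%
# Without gating, PD-L1+ immune cells inflate the numerator:
print(tps(pdl1_pos_tumor_cells + pdl1_pos_immune_cells, total_tumor_cells))  # 6.5 -> TPS >= 1%
```

In this toy case the ungated read crosses the 1% threshold discussed later in this report, flipping the apparent score category.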
In this technical report, we validated for the first time a multiplex double-labeling PD-L1 (clone SP263)/SOX10 (clone SP267) protocol on the Ventana BenchMark ULTRA for the evaluation of PD-L1 IHC staining in challenging melanoma cases. Scoring PD-L1 in melanoma can be difficult because of its variable and widespread expression, especially in cases with a low TPS, in which tumor cells show complete or incomplete membranous immunoreactivity of low intensity, and in highly pigmented lesions. We showed that this technique produces an immunohistochemical stain that can help pathologists provide a better interpretation of PD-L1, compared with the singleplex standard alone. Assay validation studies are beneficial in the context of PD-L1 evaluation because, owing to the many issues we introduced, it tends to have very high discordance rates between different clones, platforms, laboratories, and individual pathologists, hindering its usefulness. A recent study focused on the intrapathologist reproducibility of PD-L1 scoring and showed the highest disagreement in melanoma samples, with an intraclass correlation coefficient (ICC) of 0.08 and 0.20 for TPS and CPS, respectively, which was improved using a double-labeling (SOX10 and PD-L1) technique, reaching an “excellent agreement” for the TPS score.
Our work is meant to strengthen these results, validating this multiplex technique on the Ventana BenchMark ULTRA with the PD-L1 SP263 clone. Furthermore, we argue that the use of the RED chromogen for PD-L1 in melanoma improves the readability of the stain. There is no complete agreement on the ideal scoring methods to employ for PD-L1, whether ones that consider tumor cells (TCs) only, such as the TPS, or others that also include immune cells (ICs), such as the MELscore or CPS. For instance, in Mercier’s study, reproducibility was higher for the TPS score, but Darmon-Novello and colleagues argued that using the MELscore led to higher concordance, as measured with a kappa coefficient. As for these scores’ clinical relevance, much evidence supports the importance of PD-L1 expression on TCs specifically. These issues have been reported for non–small-cell lung cancer (NSCLC) as well as melanoma. Several studies have observed that the TPS has a higher interpathologist concordance and that the staining on ICs itself is more variable between different assays. Considering these results, it appears that, in general, scores that include the staining on ICs tend to be less reproducible than those that only account for TCs. Therefore, tools that help accurately detect the cell compartment in which PD-L1 is expressed are sorely needed. A universally accepted and reproducible IHC protocol for PD-L1 would provide the ideal basis for its implementation as a predictive marker. Immunotherapies targeting immune checkpoint molecules such as programmed death-1 (PD-1), programmed death ligand 1 (PD-L1), or cytotoxic T-lymphocyte antigen-4 (CTLA-4) have recently revolutionized treatment and achieved unprecedented survival benefit in advanced melanoma patients. Based on the efficacy results of the phase III CheckMate 067 trial, nivolumab in combination with ipilimumab is one of the first-line standard options for advanced melanoma. Combinations of these checkpoint therapies with other agents are now being explored to improve outcomes and enhance the benefit-risk profiles of treatment, and it is crucial to identify reliable predictive biomarkers to improve patient selection. In the CheckMate 067 phase III trial, Wolchok et al. showed a significant difference in response to different ICIs based on PD-L1 evaluation in melanoma tissue samples. In particular, melanoma patients with PD-L1 levels <1% (tested with the 28-8 pharmDx assay) showed a better prognosis in the double-agent treatment arm, with a 5-year overall survival (OS) of 50%, compared with the single-agent arm (5-year OS of 36% with nivolumab alone). In comparison, patients in the PD-L1 ≥1% group did not show significant benefit from the addition of ipilimumab to nivolumab (5-year OS was 52% for the single-agent arm and 54% for the double-agent arm, respectively). Some authorities are already releasing treatment eligibility guidelines based on PD-L1 expression. For instance, considering the results published by Wolchok and colleagues in the CheckMate 067 trial, the Italian Drug Administration Agency (AIFA) authorized the use of combination immunotherapy in advanced-stage melanoma patients with PD-L1 expression <1%. Moreover, the European Medicines Agency (EMA) approved the combination immunotherapy of nivolumab and relatlimab (anti-LAG-3) as first-line treatment for advanced-stage melanoma in adults and adolescents 12 years of age and older with tumor cell PD-L1 expression <1%.
The main limitation of this study is that the proposed PD-L1/SOX10 multiplexing protocol can run only on the Ventana Benchmark Ultra platform, which, although prevalent, is not universally adopted. Moreover, if PD-L1/SOX10 multiplex IHC staining is to be implemented in the clinical setting as a companion test for melanoma, standardized protocols and validated techniques addressing the many challenges of its evaluation will be a necessity on every available IHC platform. Furthermore, this analysis was performed on a small sample size, and future larger studies are needed to test this technique's real-world applicability and its effectiveness in improving interpathologist agreement and reproducibility.
Perception of Pediatric Oncological Patients and Their Parents/Guardians about a Hospital Oral Health Program: A Qualitative Study | a6c6db13-745e-4aac-91cf-92e62b86e010 | 9272641 | Patient Education as Topic[mh] | The risk of life inherent to cancer and the symptoms associated with its treatment have physical, psychological and social impacts on patients (Jibb et al., 2018). Children and adolescents often perceive antineoplastic therapy as something unpleasant and restrictive, and they suffer from separation from family members and friends given the long and frequent hospitalization periods (França et al., 2018). Cancer treatment is also capable of generating significant adverse effects in the oral cavity, which may negatively impact the oral health of cancer patients (Sonis and Yuan, 2016). Oral mucositis, gingival bleeding, dry mouth, viral and fungal infections are among the oral complications of treatment and these manifestations are more frequent in children than in adults (Velten et al., 2017). Oral mucositis is identified as the most debilitating complication of cancer treatment (Zhu et al., 2017). It is characterized by inflammation of the mucosa which lines the oral cavity, with an appearance of erythematous areas which can develop into large ulcers (Peterson et al., 2012). There are situations of intense pain and risk of secondary infection from ulcers, even requiring to reduce or suspend cancer treatment, which can worsen the patient’s prognosis, which demonstrates the importance of managing this condition (Sonis and Yuan, 2016). Moreover, there is risk of sepsis related to degree of oral mucosal barrier breakdown (Peterson et al., 2015). Some studies show that care measures and education in oral health can reduce the severity of oral mucositis in children under antineoplastic treatment (Cheng et al, 2004; Yavuz and Yılmaz, 2015). Therefore, oral care programs are considered viable and effective strategies to prevent oral mucositis in pediatric cancer patients (Qutob et al., 2015). A relevant aspect to be considered in cancer patients is the change in their lifestyle caused by frequent and prolonged hospital stays (França et al., 2018). Changes in daily dynamics can impact self-care, making it difficult to maintain adequate behaviors to maintain oral health during antineoplastic treatment (Cheng, 2009). In this scenario, the importance of the dentist’s role in the oncology team is evident, as it is possible to minimize oral complications resulting from antineoplastic therapy through practicing routine oral care and early diagnosis of changes which can negatively impact cancer treatment (Velten et al., 2017; Ribeiro et al., 2021). Regarding this aspect, we instituted a planned mouth care education program to pediatric cancer patients - Oral Health Education and Prevention Program (OHEPP) and it was effective to reduce the incidence of oral mucositis in these patients (Bezerra et al., 2021). Therefore, this study aimed to evaluate the perception of pediatric cancer patients and their parents/guardians about this educational and preventive oral healthcare program (OHEPP) implemented in a reference hospital for cancer treatment.
This qualitative follow-up investigation was methodologically guided by the Discourse of the Collective Subject (DCS) method and was conducted from April to October 2018. The Paraíba Federal University Research Ethics Committee approved the study (CAAE: 83179518.4.0000.5188). The Informed Consent Form (ICF) was applied to parents/guardians and the Informed Assent Form (IAF) was applied to patients over twelve years of age. This article follows the COREQ (COnsolidated criteria for REporting Qualitative research) Checklist (Tong et al., 2007). Study setting The Hospital Napoleão Laureano (HNL) is located in João Pessoa, a city in the northeast region of Brazil. Approximately 392,278 outpatient visits take place and 5,509 outpatients are treated annually at this location. The hospital has an operating room, clinical and pathological analysis laboratories, radiology, radiotherapy and chemotherapy services, adult and pediatric ICUs, an outpatient clinic, and adult and pediatric wards. The pediatric ward has 21 beds and the pediatric ICU has six beds. Study participants The population consisted of all cancer patients receiving care in the pediatric sector of the hospital from April to October 2018 and their parents/guardians. The research subjects included in the study were patients aged 4 to 19 years undergoing cancer treatment and their parents/guardians, who had to be present during the data collection period. Patients with compromised health status (n=9) or who did not consent to participate in the study (n=6) were excluded. The research sample consisted of 27 children/adolescents and 27 parents/guardians. Study procedures Individuals were approached and invited to participate in the study soon after hospital admission, or before the next chemotherapy cycle. The next day, they individually received an educational and preventive program in oral healthcare from a member of the research team (MEAS, female) who has extensive experience in caring for oncopediatric patients at the hospital where the research was conducted. The participants were informed, prior to the study, that the researcher was a qualified dentist and a postgraduate student of health care. Patients over 12 years old and their parents/guardians were invited to watch an educational video lasting approximately seven minutes. The video content covered the following themes: cancer treatment modalities; repercussions of chemotherapy and radiotherapy in the oral cavity; main oral problems related to cancer treatment; etiology, prevention and treatment of oral mucositis; guidelines on oral hygiene and the use of fluoride. Patients over 12 years of age and their guardians also received printed inserts addressing the following topics: importance of dental monitoring of cancer patients; etiology, prevention and treatment of the main oral problems related to antineoplastic therapy; importance of oral hygiene; caries etiology and measures to prevent this disease; and oral hygiene guidelines. In the case of younger patients, oral health information was transmitted through storytelling and the use of playful instruments. The patients received a diary of oral care practices, in which they (or the person responsible) recorded the oral hygiene procedures performed each week, and a gold star was attached to the diary for each day that toothbrushing and tongue cleaning were performed at least twice. In addition, a card related to the patient's performance was filled with face stickers corresponding to the patient's conduct in that week.
Oral hygiene orientation (OHO) was performed directly with each patient. Children and adolescents were asked to demonstrate how they normally brush their teeth and were then guided on the correct brushing technique through simulation on a model. Afterwards, they were invited to practice the technique on the model, and finally they performed it in front of the mirror. We opted for the modified Bass brushing technique (Bass, 1954), and the recommended amount of dentifrice followed the American Dental Association criteria for each age group (American Dental Association Council on Scientific Affairs, 2014). The OHO was reinforced every two weeks, and each week the patients received a diploma with a grade on their oral hygiene performance. Patients also received gold or silver medals according to their behavior in relation to the oral health program. Soft brushes (manual children's brush with soft bristles, Colgate®) and fluoridated toothpaste (1,100 ppm of fluoride, Colgate® Maximum Anticavity Protection plus Neutraçucar®) were provided to the patients in a standardized way during the study. Data collection took place in two stages, with semi-structured interviews being conducted with patients and guardians 15 and 30 days after beginning the program, with the aim of gaining knowledge of the changes brought about by it. Interviews were conducted by the same trained researcher (MEAS), who had established a relationship with the participants prior to the interviews. However, it was made clear that she was conducting this study as an impartial researcher. The interviewer had experience in qualitative research methods and in working with childhood cancer patients in pediatric units. Interviews were audio-recorded and contained four open questions related to the perception of patients and guardians in relation to the program ( ). Interviews lasted approximately five minutes, and patients under 6 years old were exempted from responding. Transcripts were not returned to participants for review and no repeat interviews were conducted. Analyses The audio recordings of the interviews were transcribed, and one author (MEAS) coded the data. The speeches of interviewees were identified by the letter P (patient) or the letter G (guardian) and numbered in order of interview, such as P1, P2, G1, G2, and so on. The Discourse of the Collective Subject (DCS) analysis started with a thorough reading of the transcripts to identify recurring ideas, words, or phrases; belief patterns were identified, and each of the core ideas and their corresponding key expressions was extracted. Thus, the key expressions (KE) for each of the groups (patients and parents/guardians) were initially identified, and then the central ideas (CI) were extracted. Finally, with the sum of key expressions and central ideas, the synthetic discourses representing the collective subject discourse were constructed (Lefevre and Lefevre, 2005).
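To make the DCS workflow above concrete, the sketch below shows one possible data organization (entirely our own, with invented quotes; the authors worked manually, not with this code): key expressions are grouped under their central ideas and then concatenated into a first-person synthetic discourse.

```python
from collections import defaultdict

# Hypothetical coded fragments: (speaker_id, central_idea, key_expression).
# The quotes are invented for illustration only.
coded = [
    ("P1", "improved brushing habits", "now I brush after every meal"),
    ("G2", "improved brushing habits", "she asks for the brush herself"),
    ("P3", "playful learning", "I liked the stories and the stickers"),
]

# Step 1: group key expressions (KE) under their central idea (CI)
by_idea = defaultdict(list)
for speaker, idea, expression in coded:
    by_idea[idea].append(expression)

# Step 2: build one synthetic, first-person discourse per central idea
for idea, expressions in by_idea.items():
    dcs = "; ".join(expressions)
    print(f'[{idea}] collective discourse: "{dcs}"')
```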
Characteristics of the sample The study included 27 children and adolescents assisted in the Pediatric Oncology sector of Hospital Napoleão Laureano and 27 parents/guardians. The mean age of patients was 9.41 (±4.49) years. shows the demographic and clinical characteristics of the children and adolescents who participated in the study. Perception of patients and guardians in relation to OHEPP From analyzing the interviews of patients in the periods of 15 and 30 days, the categories which emerged and the speeches representing these categories can be seen in . In the second interview question, patients were asked how much they thought their oral health habits had changed since the program was implemented. The answers could range from one to five, where 1 meant it did not change anything and 5 meant it changed a lot. It was observed that 47.1% (n=8) of the patients classified the change as 5 in both the 15-day and the 30-day periods; 35.6% (n=6) classified the change as 4 at 15 days, while 41.2% (n=7) gave the change a grade of 4 at 30 days. The patients' reports for the two evaluated periods suggest a positive impact of the OHEPP on oral hygiene habits. Changes in brushing frequency and technique were mentioned, as well as the recreational aspect of the program for children and adolescents. Thus, some main themes emerged following the analysis; the categories and reports of the guardians for the periods of 15 and 30 days are illustrated in . Parents/guardians also considered the impact of the program positive, both on the patients' oral hygiene habits and on the improvement in oral mucositis. They mentioned that the patients were more stimulated and cooperative after the playful activities were developed, and that they themselves were more informed and attentive to the prevention of oral mucositis and other health problems. When caregivers were asked about the perceived change in the patient's oral health habits (question 2) within 15 days, 59.3% (n=16) of them classified the change as 5, while 60.0% (n=15) gave the change a grade of 5 for the 30-day period. Moreover, 29.6% (n=8) and 32.0% (n=8) of caregivers classified the change as 4 for the 15-day and 30-day periods, respectively.
This study reveals that children and adolescents undergoing anticancer treatment need to look more carefully at oral healthcare and require individualized care. Strategies based on dialogue and welcoming professional attitudes are essential in oral healthcare from the perspective of expanding the knowledge of patients and their parents/guardians about possible changes in the oral cavity resulting from cancer therapy. Our study showed that the educational and preventive program implemented favored communication between the oral health team and the research participants: health professionals, patients and their parents/guardians. We emphasize that this approach is still little explored in the field of pediatric oncology. Regarding the demographic and clinical characteristics of the study participants, it was found that patients with hematologic malignancies and those who lived outside the capital/metropolitan area predominated. It is important to highlight that hematological neoplasms can have several consequences for the oral cavity, such as ulcers, petechiae, trismus, oral infections, bleeding and gingival hyperplasia (Francisconi et al., 2016). Leukemic cells have the ability to invade periodontal tissues, causing swelling, bleeding and gingival inflammation, characterizing the picture of leukemic infiltration (Mazaheri et al., 2017). These gingival changes cause fear of oral hygiene in patients and their caregivers, and oral hygiene may consequently be neglected (Cheng, 2009). Another point to be highlighted is that patients living in locations far from the capitals face challenges in cancer treatment, including financial and transportation barriers. The large distances involved can make traveling to specialized treatment centers a stressful process for patients and their guardians (Carneiro et al., 2015). The relationship with family and friends can also be affected by the physical distance imposed by cancer treatment, generating a psychological impact on patients and their families (Bakula et al., 2019). These emotional changes can impact self-care, pointing to the importance of instituting a program which encourages oral care during antineoplastic therapy. Cancer treatment obviously imposes a change in the patients' lifestyle, generating restrictions in several respects and interfering with the child's most routine activities, such as playing (França et al., 2018). The recreational activities carried out in the hospital environment are described by pediatric cancer patients and their caregivers as something positive, an important incentive and complement to regular care (Ljungman et al., 2016). The lack of recreational activities at the cancer hospital is a frequent complaint not only from patients but also from caregivers (Cheng, 2009; Jibb et al., 2018). From this perspective, the playful aspect of the program developed in this study facilitated the involvement of children and adolescents in the proposed activities, giving them the opportunity to play while assimilating the topics covered. Patients mentioned that they felt better when adhering to the proposed oral care, also referring to the importance of the program for preventing oral mucositis, and that they were even influencing their family members to adopt more adequate oral hygiene habits. Increased vigilance in relation to oral changes resulting from antineoplastic treatment is essential, and by maintaining oral care it is possible to minimize some oral complications and to provide comfort (Elad et al., 2018; Yavuz and Yılmaz, 2015).
Early diagnosis and treatment of these complications can also prevent more severe conditions with the potential to negatively affect cancer therapy (Ribeiro et al., 2021). Oral health interventions and education are able to reduce the severity of oral mucositis in children and adolescents undergoing cancer treatment (Yavuz and Yılmaz, 2015; Ribeiro et al., 2021). These data corroborate the guardians' statements regarding the improvement in hygiene habits and in the patients' oral mucositis after implementing the OHEPP. Moreover, we emphasize that this oral health program reduced the risk of developing oral mucositis in pediatric oncological patients (Bezerra et al., 2021). Pediatric cancer patients and their guardians often show signs of stress and psychological changes after diagnosis (Bakula et al., 2019), and these children and adolescents may demonstrate resistance to oral care (Cheng, 2009). Accordingly, an educational and preventive oral health program becomes relevant as it encourages the adoption of good hygiene habits and oral health surveillance, preventing patients and guardians from neglecting these issues due to difficulties related to cancer treatment. A qualitative study reported that children, despite being aware of the importance of oral care, sometimes neglected oral hygiene due to the discomfort associated with oral mucositis (Cheng, 2009). Parents described the oral hygiene moment as stressful for their children and for themselves. Guardians also stated that the health team should carry out more activities to distract children from their oral mucositis. In addition, they mentioned the need for more information regarding this disease. In this respect, previous studies demonstrate that guardians and patients themselves refer to the need for more accurate information about cancer treatment and its adverse effects. The lack of adequate communication between the medical team and the family can be stressful and traumatic (Lyu et al., 2019; Robertson et al., 2019). Thus, the OHEPP has a relevant informative character, as it clarifies the main oral manifestations related to antineoplastic therapy and their forms of prevention and treatment, thus enabling both patients and caregivers to feel more informed and secure in relation to the course of treatment. Supporting this, the OHEPP was effective in reducing the incidence of oral mucositis in these pediatric cancer patients (Bezerra et al., 2021). It is important to emphasize that comparisons of the results of this study with those found in the literature were limited due to the scarcity of studies assessing the perception of patients and guardians about oral health programs implemented in the hospital environment. The few studies which have investigated the subject only assess the impact of oral health programs in a quantitative way, without considering the speeches of patients and their parents/guardians. Considering the difficulty in measuring values and opinions quantitatively, this work demonstrates its relevance and uniqueness in analyzing the perspective of children and adolescents with cancer and their parents/guardians on an educational and preventive oral health program, exposing their expectations and propositions in relation to the work developed. Although our study produced relevant data, it had some limitations. One limitation is related to the restricted context of data collection and the cultural and geographic specificities of the participants and the study setting.
It should also be considered that the young age of some patients and the low level of education of some parents/guardians may have limited their ability to answer certain questions clearly. However, the results of this study are relevant and represent an important source of information to guide the development and implementation of oral care programs for children and adolescents undergoing cancer treatment, which may be reflected in improved quality of life and have a positive impact on cancer therapy, reducing prolonged hospitalization and increased treatment costs. It is concluded that the OHEPP helped to improve knowledge and attitudes related to oral health, favored the adoption and/or increased frequency of oral hygiene habits, and increased surveillance for oral changes resulting from antineoplastic treatment among pediatric cancer patients and their parents/guardians.
Conceptualization: M.E.A.S., A.M.G.V., B.M.S., I.L.A.R.; Methodology: M.E.A.S., A.M.G.V., B.M.S., I.L.A.R.; Formal analysis and investigation: M.E.A.S., A.M.G.V., B.M.S., I.L.A.R.; Writing - original draft preparation: M.E.A.S., A.M.G.V., B.M.S., I.L.A.R.; Funding acquisition: M.E.A.S.; Supervision: A.M.G.V., B.M.S.
|
Does access to a portable ophthalmoscope improve skill acquisition in direct ophthalmoscopy? A method comparison study in undergraduate medical education | f09d99f1-15e0-4d66-946e-c4ef11d450c0 | 6567496 | Ophthalmology[mh] | Direct ophthalmoscopy is an essential skill for medical graduates as outlined by the General Medical Council (GMC) and supported by the Royal College of Ophthalmologists. Specific ophthalmic problems are estimated to make up approximately 1.46–6% of UK Emergency Department attendances and 1.5% of GP consultations. Timely and accurate DO can be life-saving in some patients, for example in recognising papilloedema. DO is also required in the management of chronic multi-system diseases such as diabetes mellitus and hypertension. Despite the importance of and frequent need to perform DO, there are multiple barriers to learning this skill at an undergraduate level. Ophthalmology is not a compulsory clinical attachment for all UK medical schools and consequently some students graduate without any ophthalmoscopy exposure. Limited dedicated ophthalmic curricula time is a common finding globally affecting medical schools in both high and low resource countries. Perhaps unsurprisingly, cross-sectional studies highlight that medical students’ self-reported confidence in DO can be low. These findings are continued after graduation, with UK studies of Foundation Year and ED doctors highlighting that the majority lack confidence using an ophthalmoscope correctly and in identifying pathology. Another barrier to students to learning DO is limited assessments. Objective assessment of DO is difficult due to the inherent challenge that examiners cannot easily determine how well students can view a subject’s fundus. Assessment drives learning behaviour and time-pressured medical students will inevitably prioritise knowledge and skills that they will be assessed on. A 2011 survey of UK medical schools highlighted that only 38% undertook formal assessment of students’ ophthalmoscopy skills. Assessments and simulation models used may lack both objectivity and validity. Device access may be a major barrier to improving frequency of DO performance and associated skill acquisition. Most UK medical students do not own a direct ophthalmoscope or have easy access to a functioning device on hospital placements. Ownership of ophthalmoscopes amongst students fell dramatically following removal of equipment grants in 1986. Subsequent students have therefore entered a learning environment where the norm is not to have their own device. The cost of a traditional direct ophthalmoscope (TDO) such as a Keeler standard model is around £220 and considered prohibitively expensive to most undergraduates. Availability of ophthalmoscopes in hospital attachments is recognised to be limited. This is multi-factorial: NHS procurement can lack consistency in which models are purchased and ward staff may not provide ongoing maintenance leading to non-functioning devices due to burst bulbs or flat batteries. These issues present further challenges to skill mastery. The Arclight (AO) is a device that offers promise in overcoming these barriers. It is a highly portable (11 cm long and weighing 18 g) solar powered, LED illuminated ophthalmoscope. In the UK it costs approximately £50, a significant reduction compared to TDOs. (Fig. ). Despite its low cost, previous studies have shown it to be as good as TDOs with the majority of users finding it easier to use. 
Consequently, the aim of this study is to assess the impact of personal ownership of a portable ophthalmoscope (AO) on DO skill acquisition and competency amongst medical students in the clinical environment, compared to a control group with typical access to TDOs.
Design We used a mixed methods design, primarily in the form of a method comparison study supported by a qualitative survey. Ethical approval was granted by the University of Birmingham ethics board in October 2016 (Reference: ERN_16-1021). Setting and participants The study was performed amongst fourth year MBChB medical students at the University of Birmingham during the period November 2016 to April 2017. Participants were all undertaking their 18-week Specialty Medicine (SPM) hospital placement. SPM is a mandatory clinical attachment which involves rotation through different specialities, including one to two weeks of ophthalmology. Students are randomly allocated between eight different hospitals across the West Midlands. Recruitment and randomisation All 178 4th year medical students undertaking the SPM placement at the time of study recruitment were invited to participate via email. Students were offered a free AO for taking part in the study. The only additional eligibility criterion applied was that students were required to have a refractive error between -6D and +4D to participate. This was to match the capacity of the AO to correct refractive error and is in keeping with previous studies. A total of 42 students (24% response rate) were successfully recruited and individually randomised by the primary investigator (PI) using computerised random numbers to either the control or intervention arm. Three objective DO competency assessments were planned before the students started their 18-week SPM placement and were to be repeated at the end. The students in the intervention arm were given an AO to use throughout the study period and keep afterwards. Students in the control arm received their AO at the end of the study. All participants also then received individualised feedback in the form of their raw assessment scores. These were not graded or linked with any assessments within the MBChB programme. Control Students randomised to the control group used TDOs during both the pre and post clinical attachment assessments. During their clinical attachment they only used the TDOs typically available in the hospitals of their SPM placements. Intervention Students randomised to the intervention group all used their own personal AO during both assessments and their SPM placements. Students could replace lost or broken AOs by contacting the PI. Assessments Three primary assessments of DO competency were performed on all participants at the study beginning and end: judgement of vertical cup disc ratio (VCDR), fundus multiple choice questions (F-MCQs) and model slide regional examination (MSE). VCDR, F-MCQ and EOU all necessitated performing ophthalmoscopy on other study participants, while MSE consisted of examining pre-generated fundal images on 35 mm slides in eye models. Students were all emailed information about the DO devices they would be using and the different assessments 2 weeks before the baseline assessment. No information was given on how to perform DO and no teaching was delivered on the day of assessments. Students were given ten minutes to familiarise themselves with their allocated device prior to the baseline assessments. Students also self-assessed their examination competence for each ophthalmoscopy examination carried out on another study participant. This was via an 'Ease of Use' (EOU) scale used in a previous study, which ranged from 1 ('Couldn't use this ophthalmoscope') to 8 ('Determined a cup:disc ratio with a low level of difficulty').
This scale is included in Appendix 1. Model slide regional examination (MSE) This assessment used fundus photo slides annotated with letters of various font sizes printed in different positions on the retina and placed within a mannequin (Eye Retinopathy Trainer®, Adam, Rouilly Co., Sittingbourne, UK). Each participant examined six model eyes, each with six letters in the same pre-defined retinal locations but with reducing font sizes. Scores were calculated as the percentage of correct answers. Fundus photography After recruitment, all participating students had fundus photographs taken of both their eyes by the PI using a Topcon® retinal fundus camera. These photographs were cropped to illustrate the optic nerve in the centre of an image with a one disc diameter surrounding area of retina and used to generate the F-MCQs. Fundus multiple choice questions (F-MCQs) F-MCQ assessment sessions required every student to perform ophthalmoscopy on every other student. The examining student was required to identify the optic nerve of the student being examined. Specifically, each student had two F-MCQs (one for each eye), each with four images: their previously acquired optic nerve head image and three non-matching distractors from other participating students. See Fig. for an example. One mark was awarded for a correct match and zero for an incorrect match. Vertical cup to disc ratio (VCDR) Participants were requested to assess and record the VCDR of each eye examined. Three ophthalmic specialists (AB, RB and PIM) provided VCDR assessments for all optic nerve head images from the participants. The mean of these assessments was used to form the 'gold standard' against which participant results were compared. Students scored a mean magnitude error based on the comparison of each of their assessments to the gold standard. Electronic logbook Students kept an electronic logbook (e-logbook) of all DO examinations they performed during their 18-week placement, including EOU scores. Students coded this data during placement using a simple online application accessible via smart phones. Participants were contacted by email at six points during the study period and reminded to code examinations. Statistical analysis Quantitative data were analysed according to a per protocol principle using the software SPSS Statistics (Version 24, IBM®). Comparison of baseline characteristics, including gender, refractive error and hospital placement, was undertaken using the Chi-squared test and Fisher's exact test. Median/mean differences in DO competency were compared using the Wilcoxon Signed Rank Test for non-parametric and the Paired Samples t-test for parametric data. Intraclass correlation coefficients (ICCs) were used to measure the agreement between assessments in ranking participants' performance. Correlations between performance and other independent factors were analysed using Spearman's Rank Test.
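As an illustration of the VCDR scoring logic described above, the following sketch (our own; variable names and values are invented, and the study's actual analysis was done in SPSS) computes a student's mean magnitude error against the specialists' gold standard.

```python
import statistics

def vcdr_error_score(student_estimates, specialist_panel):
    """Mean magnitude (absolute) error of a student's VCDR estimates
    against a 'gold standard' formed by averaging specialist ratings.

    student_estimates: {eye_id: student's VCDR estimate}
    specialist_panel:  {eye_id: [specialist VCDR estimates]}
    """
    errors = []
    for eye_id, estimate in student_estimates.items():
        gold = statistics.mean(specialist_panel[eye_id])
        errors.append(abs(estimate - gold))
    return statistics.mean(errors)

# Illustrative values only
panel = {"eye1": [0.3, 0.35, 0.4], "eye2": [0.5, 0.55, 0.5]}
student = {"eye1": 0.5, "eye2": 0.4}
print(round(vcdr_error_score(student, panel), 3))  # 0.133
```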
A total of 38 students (21% of cohort) completed the study (Fig. ). Comparison of baseline characteristics including gender, refractive error and hospital placement demonstrated no statistically significant difference between the groups (Appendix 2). The e-logbook demonstrated no difference in the median number of examinations performed by the AO group compared to control (6.0 vs 6.0) (Table ). The greater mean number of examinations performed by the AO group vs control (9.6 vs 7.0, p = 0.41) was due to a small minority (n = 3) of students in the AO group performing large numbers of examinations. There was a minor reduction in the magnitude of VCDR judgement error in both groups; intervention −0.12 (CI −0.18 to −0.05) vs control −0.08 (CI −0.15 to −0.02) (Table ). Both groups performed worse in the final F-MCQ and MSE assessments compared with their baseline assessments; intervention −16.7 (IQR −18.7 to 10.4, p < 0.01) vs control −7.1 (IQR −21.4 to −1.8, p < 0.01), and intervention −12.5 (IQR −25 to 0, p < 0.01) vs control −12.5 (IQR −25 to −12.5, p < 0.01), respectively. There was no statistically significant difference between these assessed competency changes (VCDR p = 0.561, MCQ p = 0.872, Model p = 0.772). The AO group demonstrated a statistically significant increase in EOU scores of 0.24 (CI 0.08 to 0.39) vs control 0.04 (−0.14 to 0.24). Notably, the AO group also performed better in the F-MCQ assessments at baseline (58.3% vs control 42.9%, p = 0.013) and at the final assessment (45.8% vs control 35.7%, p = 0.043). There was no difference in scores between groups across the other assessment modalities. ICCs demonstrated no significant performance rank correlation between the assessments (VCDR/MSE 0.124, VCDR/F-MCQ -0.111, MSE/F-MCQ 0.096).
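For readers wanting to reproduce this style of analysis, the snippet below is a hedged sketch (the study itself used SPSS; all numbers here are invented) applying two of the tests named in the methods to paired scores and logbook counts.

```python
from scipy import stats

# Paired baseline/final MSE scores (%) for one group -- invented numbers
baseline = [62.5, 50.0, 75.0, 58.3, 66.7, 41.7]
final    = [50.0, 37.5, 62.5, 58.3, 41.7, 29.2]

# Wilcoxon signed-rank test for paired, non-parametric data,
# as used for the median competency changes reported above
w_stat, p_value = stats.wilcoxon(baseline, final)
print(w_stat, p_value)

# Spearman's rank correlation, e.g., score change vs examinations logged
n_exams = [3, 1, 9, 6, 2, 0]
change = [f - b for f, b in zip(final, baseline)]
rho, p = stats.spearmanr(n_exams, change)
print(rho, p)
```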
The key finding from our study was the low number of DO examinations performed by both groups: a median of six during the 18-week clinical attachment, which included 1 to 2 weeks of ophthalmology. The low number of examinations is particularly striking given that participants self-selected for study involvement, knew they were being observed, and the intervention group were given free portable ophthalmoscopes. Students may have simply failed to record examinations, although this seems unlikely given the potentially positive effect of observer bias and easy access to the smartphone-based e-logbook. A limitation of studies in this field of research is the lack of a validated objective measure of ophthalmoscopy skills at an undergraduate level. We chose a range of assessments to provide an overview of student performance in an attempt to overcome this. VCDR and EOU scoring, F-MCQ and MSE have all been used in similar studies before but not directly compared or formally validated for assessing competence. Similar competency results were observed between intervention and control groups across all three assessments. Both groups demonstrated a minor improvement in VCDR judgement but a reduction in F-MCQ and MSE performance. Students generally found VCDR assessment challenging, which is not surprising given the significant assessment variation even amongst ophthalmic specialists. Given the lack of correlation with the number of examinations, the minor improvement seen in VCDR judgement was likely due to general ophthalmology placement experiences or personal study rather than DO practice (Appendix 3). Our results suggest F-MCQs may show promise going forwards, as they were the only assessment modality to positively correlate with the number of examinations performed (Appendix 3). Anecdotally, students reported finding the second set of MSE slides harder to visualise. This was confirmed by the PI and was likely due to variation in the print quality or letter type. The reduction in performance scores in the final MSE assessments may have been partly due to this. This was not the case for F-MCQs, as the same questions were used at both baseline and final assessment. MSE has inherently limited construct validity and in our study appeared to be affected by variances in difficulty. There was also a significant correlation with refractive error, i.e. students with greater refractive error performed worse in MSE assessments than their peers, which suggests this is a source of performance bias for MSE. VCDR, F-MCQ and MSE appeared to be testing different aspects of DO competency. This is supported by the lack of a significant intraclass correlation coefficient (ICC) between any of the assessments. Further research is required to develop a fit-for-purpose objective measure of DO competency at the undergraduate level. The AO may provide some performance advantage over traditional models. Despite the lack of impact of the AO on the number of examinations and DO skill acquisition, our study confirmed non-inferior performance of the AO versus TDO in 2 of the 3 objectively assessed modalities, and higher F-MCQ scores at both the baseline (58.3% vs 42.9%) and final assessments (45.8% vs 35.7%). Furthermore, there was a statistically significant increase in self-assessed EOU score for students using the AO. Further research should aim to explore students' attitudes towards and experience of practising ophthalmoscopy to help identify what barriers to DO skill acquisition are present at an undergraduate level and how to address these.
One factor not addressed by this study is clinical supervision and the availability of experienced supervisors. Junior doctors often provide frontline clinical teaching, but if they lack confidence in their own ophthalmoscopy skills, this may lead to a reluctance to support and guide students. Strengths and limitations The strengths of this study were randomising students into a control group and an intervention group, use of novel technology, and collection of longitudinal data on clinical attachment combined with assessment data. We acknowledge the following limitations: This research was carried out at one institution only, so it will reflect the curriculum and clinical experience available. The study may be underpowered due to a relatively small analysed sample size (n = 38). Without any similar previous or pilot studies it was not possible to perform a reliable power calculation. Due to the nature of the intervention, it was not possible to mask either the educators or students to which device was being used by each group. Students' ophthalmology week took place during any one of the 18 weeks of SPM attachment and we did not record when this took place for individual students. To what degree the timing of this week affected results is unknown. For example, students who had their ophthalmology week first may have been more confident performing ophthalmoscopy in the rest of the block, and vice versa. The assessment measures lacked validation, particularly the F-MCQs. For each F-MCQ, distractor images were picked to provide contrast (for example, different vasculature or VCDR), but this limited standardisation, and questions may have varied in difficulty. E-logbook data were self-reported. Students may have under-reported or entered false examinations.
In our study, personal ownership of a portable ophthalmoscope offered limited advantage over traditional models. Students did not practice DO frequently, even with access to their own portable device. This was reflected in a lack of any meaningful improvement in DO skill over the study period. The AO represents a suitable alternative to more expensive traditional devices, but our results suggest changing student engagement with ophthalmoscopy will require a more wide-ranging approach than improving device access alone.
|
Comparative Physical Study of Three Pharmaceutically Active Benzodiazepine Derivatives: Crystalline versus Amorphous State and Crystallization Tendency | c8539220-7eaa-478e-a0c3-ec0c4e058b5b | 8594866 | Pharmacology[mh] | Introduction
The chemical modification of active pharmaceutical ingredients (APIs) is one of the main strategies to identify better drugs with reduced side effects and increased efficacy or bioavailability. A historical example is that of the active ingredient of aspirin: derivatization of salicylic acid, the active principle present in willow bark, into acetylsalicylic acid leads to a substantial reduction of the side effects of the naturally occurring drug. Given that low solubility in water, and thus low oral bioavailability, is one of the main issues in current drug research, chemical derivatization of APIs in the form, e.g., of hydrochloride salts with enhanced solubility is often pursued. Another related strategy for efficient drug administration is the development of a prodrug, i.e., an inactive compound (usually a derivative of an active drug) that undergoes in vivo transformation, through enzymes or metabolic processes, into the active parent drug. This strategy has been applied successfully to improve the pharmacokinetic properties of drugs since the middle of the last century, when the term prodrug was first introduced. Nowadays, prodrugs make up almost 10% of administered drugs, having reached a peak of 20% of the market between 2000 and 2008.
While chemical derivatization is mainly aimed at identifying drugs with better biochemical properties, it also obviously affects the physical properties of the parent API. In the vast majority of cases, the induced changes in physical properties stem from relatively minor chemical changes, as the derivative (prodrug, salt, etc.) is usually one or two metabolic steps away from the active parent drug. The chemical modification may, for example, determine a modified crystal structure of the resulting drug and also have an impact on the possible polymorphism and relative stability of different crystalline forms, which is of relevance for API storage prior to industrial processing. These aspects are extremely important for the pharmaceutical industry, as polymorphism or the possible stability of an amorphous (glass and supercooled liquid) phase can have a strong impact on the viable protocols for the preparation of suitable formulations for the administration of APIs.
Drug derivatization also affects the glass transition temperature and the kinetic stability of the amorphous form of the drug. It is well-known that amorphous pharmaceuticals have better dissolution and thus better bioavailability properties than their crystalline counterparts, and a few amorphous drugs have appeared on the market in recent years. The amorphous form of a drug may be present in a formulation as a result of industrial processing via, e.g., milling and spray or freeze drying. Despite their advantage in terms of solubility, however, amorphous drugs are not thermodynamically stable and are thus prone to recrystallization into the lower-solubility crystalline form. A better understanding of the amorphous state is needed to advance the formulation of amorphous drugs. In the context of drug modification strategies, it would be extremely useful to be able to predict how different drug derivatives behave in terms of kinetic stability and tendency toward recrystallization of the amorphous form, both for amorphous API phases formed spontaneously and for those formed purposefully during the formulation of a medicament. The present paper takes a step in this direction by comparing the physical properties of the amorphous and crystalline forms of three distinct pharmacologically active benzodiazepines, with the aim of exploring possible routes to increase the kinetic stability of amorphous derivatives.
The common molecular structure of the benzodiazepine drugs consists of a rigid benzene ring and a flexible diazepine ring fused together. Several benzodiazepines also display a third six-membered ring covalently attached to a carbon atom of the diazepine ring (see, e.g., the molecular structures displayed as insets to ). These drugs work by enhancing the effect of the gamma-aminobutyric acid neurotransmitter, and they have sedative, hypnotic, anxiolytic, anticonvulsant, and muscle relaxant properties. According to a WHO report of 2017, 322 million people suffered from depression as of 2015 and almost as many suffered from other anxiety disorders, and it is estimated that 40% of patients with depressive and anxiety disorders are prescribed benzodiazepines. Oral administration is the most common route of administration of benzodiazepines (although injectable, inhalation, and rectal forms are also available), but, given that they are lipophilic drugs, problems of low solubility and bioavailability may arise in the gastrointestinal tract. Low bioavailability may result in the need for a higher dose administered to the patient, to account for the percentage that is not absorbed and metabolized. This may lead to undesirable adverse side effects, which are already quite severe with high doses of this type of drug.
Here, we study three related benzodiazepine derivatives: Diazepam, Nordazepam (also known as Nordiazepam or desmethyldiazepam), and Tetrazepam. Diazepam (see inset to a) is one of the best known benzodiazepines and was first marketed as Valium. It is used as a treatment for various mental diseases, but its primary use is for anxiety, states of agitation, or panic attacks. Diazepam has been studied extensively in both crystalline and amorphous states, sometimes in comparative studies with other benzodiazepines. Its main active metabolite is Nordazepam, whose chemical structure differs from that of Diazepam only by the substitution of the methyl group linked to the nitrogen 1 of the diazepine by a hydrogen atom (see the inset to b). This difference, however, is highly significant in that it confers on the Nordazepam derivative the possibility of self-aggregation via hydrogen bonding through the H-functionalization of the nitrogen atom. Tetrazepam (inset to c) differs from Diazepam in that the benzene ring attached to the carbon 5 of the diazepine ring is substituted by a cyclohexene ring. It was marketed principally as a treatment for muscle spasms and panic attacks but was suspended from the market across the European Union in 2013 due to cutaneous toxicity.
Our comparative study of these three pharmaceutically active ingredients encompasses both their crystalline and amorphous forms (supercooled liquid and glass), as well as the transition from the supercooled liquid phase to the crystalline one. We focus in particular on the molecular conformations and intermolecular interactions in the crystal phase, Hirshfeld surfaces, calorimetric properties, dynamic relaxations, and recrystallization kinetics, the latter two measured by dielectric spectroscopy. Our aim is to understand how the modifications in molecular structure and the resulting intermolecular interactions affect the crystal structure and molecular dynamics in the amorphous phase, as well as the melting point, glass transition temperature, and tendency toward recrystallization of the various derivatives, so as to identify possible structure–property correlations. The study of molecular relaxation processes in diazepines is particularly interesting due to the inherent flexibility of the seven-membered diazepine ring, which leads to conformational diversity of the molecules and therefore to the possible existence of a relaxational inter-conformer conversion dynamics. To the best of our knowledge, only a few very recent studies have focused on the interpretation of the dielectric relaxation of flexible heterocyclic molecules. A further outcome of this work is therefore to expand the current experimental knowledge of the conformational dynamics of flexible cyclic or ring-containing molecules.
Materials
and Methods Tetrazepam (TETRA, hereinafter) is a powder of
medicinal grade
kindly supplied by Daiichi Sankyo France SAS. Samples of medicinal
grade Nordazepam (NOR) were kindly provided by Bouchara-Recordati
(France) and medicinal grade Diazepam (DIA) was kindly supplied by
Neuraxpharm (Spain). The powders of the three diazepines, with purities
higher than 99.5%, were used as received without further purification.
Differential scanning calorimetry (DSC) experiments were carried out
under a nitrogen atmosphere on samples loaded in pierced aluminum
pans, by means of a Q100 calorimeter from TA Instruments. Measurements
were performed using heating/cooling rates of 10 K min –1 and sample masses of the order of 5 mg, as determined with a microbalance
with 0.01 mg sensitivity. Powder X-ray diffraction patterns
have been acquired by means of
a vertically mounted INEL cylindrical position-sensitive detector
(CPS-120) using the Debye–Scherrer geometry and transmission
mode. Monochromatic Cu Kα 1 (λ = 1.54056 Å)
radiation was selected by means of a quartz monochromator. Cubic phase
Na 2 Ca 3 Al 2 F 4 was used for
external calibration. The analysis of the diffraction patterns (fitting
of diffraction peaks by means of the Materials Studio software ) was carried out using the published monoclinic
(P2 1 /c) structures of TETRA, DIA, and NOR. Hirshfeld surface analyses were performed by means of the CrystalExplorer
software ( https://crystalexplorer.scb.uwa.edu.au/ ). Broadband dielectric spectroscopy (BDS) measurements were
carried
on the amorphous form (supercooled liquid and glass states) of the
drugs, by means of a Novocontrol Alpha analyzer. The samples were
placed in a stainless steel parallel-plate capacitor specially designed
for the analysis of liquid samples, with the two electrodes kept at
a fixed distance by means of cylindrical silica spacers of 50 μm
diameter. Temperature control of the capacitor and thus of the sample
was achieved with a nitrogen-gas flow cryostat with a precision of
0.1 K. To obtain the amorphous form, the powders were initially melted
in the capacitor outside the cryostat, cooled at room temperature,
and melted again inside the cryostat. Each sample was then cooled
with a cooling rate of 10 K min –1 to 123 K to avoid
recrystallization, and isothermal spectra were then acquired every
2 or 5 K, waiting each time 5 min for temperature stabilization. Dielectric
spectra were measured in the frequency range between 10 –2 and 10 7 Hz, from 123 K up to the melting temperature
of each compound (404.1, 415.6, and 487 K, for Diazepam, Tetrazepam,
and Nordazepam, respectively). To obtain relaxation times and
quantify the changes in relaxation
dynamics, we employed the Grafity software to fit the dielectric spectra
as the sum of a power law representing the dc conductivity contribution,
modeled as a term of the form in the complex
permittivity, where s is an exponent close to unity,
and a Havriliak–Negami
(HN) function for each relaxation component. Overall, the spectra contained four different relaxation components
(referred to as α, β, γ, and γ′ in
the text), and the total complex permittivity was modeled as follows: 1 Here, ω
= 2πν
is the angular frequency, ε ∞ is the permittivity
in the high frequency limit, Δε i is the dielectric intensity (or relaxation strength) of relaxation i ( i = α, β, γ or γ′), a i and b i are parameters
describing the shape of the corresponding loss curves, and τ HN, i is a time parameter connected to the
characteristic relaxation time τ max, i , corresponding to the maximum loss of relaxation i . In terms of the fit parameters, τ max, i (which we will refer to as τ i in the following, for simplicity) is given by the following: 2 The shape parameters a and b can
vary between 0 and 1. Specific cases of the HN function are the Cole–Cole and Cole–Davidson functions, which are obtained for b =
1 and a = 1, respectively. In the case of the Cole–Cole
function, reduces
to τ i = τ HN, i . Throughout the text, we refer to τ max, i simply as the relaxation time, and use for it the
symbols τ or τ i to simplify
the notation. Most dielectric spectra displayed only two relaxations
in the accessible frequency window, namely, either the α and
β relaxations (near and above T g ) or else the intramolecular γ and γ′ relaxations
(well below T g , see ), so that our fit procedure
only involved at most two HN functions at the time. The (primary)
α relaxation turned out to be well described by a Cole–Davidson
function, while all secondary relaxations could be fitted with Cole–Cole
functions. This reduced significantly the actual number of free fit
parameters that had to be employed in each fit.
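For readers wishing to reproduce this type of analysis outside of Grafity, the following is a minimal Python sketch (assuming NumPy and SciPy; all function names are illustrative, not Grafity's API) of the loss part of the model of eq 1 for the two-relaxation case just described, together with the conversion of eq 2:

```python
import numpy as np
from scipy.optimize import curve_fit

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def hn_loss(freq, d_eps, tau_hn, a, b):
    """Dielectric loss of a single Havriliak-Negami component (one term of eq 1)."""
    iwt = (1j * 2 * np.pi * freq * tau_hn) ** a
    return -np.imag(d_eps / (1 + iwt) ** b)

def model_loss(freq, sigma0, s, de_a, tau_a, b_a, de_b, tau_b, a_b):
    """dc-conductivity power law + Cole-Davidson alpha peak (a = 1)
    + Cole-Cole beta peak (b = 1), as used for spectra near and above Tg."""
    dc = sigma0 / (EPS0 * (2 * np.pi * freq) ** s)
    alpha = hn_loss(freq, de_a, tau_a, 1.0, b_a)  # Cole-Davidson
    beta = hn_loss(freq, de_b, tau_b, a_b, 1.0)   # Cole-Cole
    return dc + alpha + beta

def tau_max(tau_hn, a, b):
    """Loss-peak relaxation time of eq 2; reduces to tau_hn when b = 1."""
    return (tau_hn
            * np.sin(np.pi * a / (2 + 2 * b)) ** (-1.0 / a)
            * np.sin(np.pi * a * b / (2 + 2 * b)) ** (1.0 / a))

# Example usage on a measured loss spectrum eps2 sampled at frequencies f (Hz);
# the initial guess p0 is, of course, spectrum dependent:
# popt, _ = curve_fit(model_loss, f, eps2,
#                     p0=[1e-12, 1.0, 3.0, 1e-2, 0.6, 0.3, 1e-6, 0.4])
# tau_alpha = tau_max(popt[3], 1.0, popt[4])  # alpha relaxation time
```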
Results

3.1 Differential Scanning Calorimetry Results

shows the
DSC traces obtained for the three diazepines DIA, NOR, and TETRA.
In all three cases, the as-received powders were completely crystalline,
as the first heating ramp only displayed a melting endotherm with
onsets at 404.1, 487.0, and 415.6 K for DIA, NOR, and TETRA, respectively.
Values coincide within the experimental error with those available in the scientific literature. The melting point of NOR and the enthalpy of melting are both significantly higher than those of the other two derivatives, likely due to the presence of N–H···O=C hydrogen bonds, which can only form in the demethylated derivative (see the next section). The subsequent
cooling ramp leads to a glassy phase for all three pharmaceuticals,
and on reheating, a step-like transition can be observed in the DSC
traces, corresponding to the glass transition temperature ( T g ). In most cases, though not in all DSC runs,
TETRA and NOR displayed (at least partial) recrystallization of the
supercooled liquid in the heat up run, followed again by the melting
peak (see inset to b). The recrystallized phase is the same as the initial one,
as the melting temperature is the same on heating the recrystallized
sample. The supercooled TETRA and NOR liquids were observed to crystallize
also in dielectric spectroscopy experiments (see ), while recrystallization
of DIA was absent also in this case. The sample geometry and the vessel
are quite different in DSC (droplet in aluminum pan) and dielectric
(film in stainless steel cylinder with silica spacers) experiments.
The fact that the three samples displayed the same tendency toward
recrystallization under such different experimental conditions indicates
that the recrystallization of TETRA and NOR probably took place by
homogeneous (rather than heterogeneous) nucleation of the crystal
phase. The characteristic onset temperatures of the glass transition,
recrystallization, and melting points are listed in for all three pharmaceutically
active compounds, together with the melting enthalpies. The recrystallization
temperature is only listed for completeness, as it did not always
occur in all DSC scans at the same temperature. This is not surprising,
as nucleation is a stochastic event that depends on the characteristics
of the sample (heterogeneous vs homogeneous nucleation) and its history
(e.g., cooling rate from the liquid phase, temperature at which it
is then kept). It may be seen that T m and T g roughly scale with one another:
the T g / T m ratio
is 0.78
for DIA, 0.71 for NOR, and 0.75 for TETRA. The values for TETRA and
DIA are quite similar, albeit T m is slightly
higher for TETRA than that for DIA, while T g is somewhat lower for TETRA than that for DIA. The glass transition
temperature is often found to display a correlation with the molecular
weight M_w. In particular, the empirical rule T_g ≈ M_w^(1/2) appears to be fulfilled in the case of van der Waals molecular liquids. Such a correlation
probably reflects the fact that the extent of van der Waals interactions
increases with the molecular mass (due to the increase of molecular
polarizability and of the closest intermolecular contacts), and the
fact that, at a given fixed temperature, a more massive molecule has lower
mobility, but it does not take into account hydrogen bonding or any
other type of directional intermolecular bonds. In fact, the glass
transition temperature of the studied diazepines does not correlate
with the molecular weight: NOR, which has the lowest weight, has the
highest glass transition temperature. The origin of the higher T g is likely the same as that of the higher T m , namely, the presence of intermolecular H-bonds
in the liquid phase of NOR. Indeed, in the absence of any H bonding
the aforementioned correlation of molecular weight and glass transition
temperature would result in a T g value
of NOR closer to those of DIA and TETRA, which is not observed.
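As a back-of-the-envelope illustration of this point (the molecular weights below are approximate, and the T_g values are the kinetic ones reported in Section 3.3.1), the scaling rule alone would predict for NOR:

```python
# Approximate molecular weights, g/mol: DIA 284.7, NOR 270.7, TETRA 288.8.
tg_dia = 312.6  # K, kinetic glass transition temperature of DIA

# Tg ~ Mw**(1/2) would predict for NOR, relative to DIA:
tg_nor_predicted = tg_dia * (270.7 / 284.7) ** 0.5
print(round(tg_nor_predicted, 1))  # ~304.8 K, versus the observed 347.2 K
```

The roughly 40 K excess over this van der Waals estimate is consistent with the presence of H-bonds in liquid NOR.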
3.2 X-ray Diffraction Results and Analysis

All three compounds display, in the crystalline phase, the same monoclinic space group (P2₁/c). The diazepine ring of all molecules
adopts a bent boat-like conformation, with two possible isoenergetic
conformers, which are mirror images of one another. The two conformers
have opposite chirality and are named P (plus) or M (minus) according
to the sign of the (O=)C–C(H₂)–N=C
torsion angle (see the inset to ). All three crystals contain a 1:1 ratio of P and
M conformers. The geometry of the conformers is similar in all three
compounds. For example, the angle formed by the C=N bond with
the plane of the fused benzene ring is equal to 41.6, 38.5, and 48.6°
in crystalline DIA, NOR, and TETRA, respectively. The analysis
of the X-ray structures at room temperature shows unambiguously that
NOR is the only compound of the three related drugs studied that forms
strong hydrogen bonds in the crystalline state, namely, intermolecular
N–H···O bonds involving the amine nitrogen of
the diazepine ring and the carbonyl oxygen of the same group of a
nearest-neighbor molecule in the crystal structure (see ). This is in agreement with
the higher melting point and enthalpy of fusion of NOR compared with
the other two compounds . It is interesting to point out in this respect that while
in both crystalline DIA and TETRA the carbonyl group and the adjacent
methyl group are basically coplanar, with an H₃C–N–C=O
torsion angle smaller than 2°, in the case of NOR, which is a priori the only compound where the corresponding (peptide)
moiety is expected to be planar due to the amide electronic resonance,
the H–N–C=O torsion angle is instead approximately
10°. Non-planar peptide bonds are not uncommon in H-bonded structures
such as proteins in their native state. In the case of crystalline NOR, the lack of planarity of the amide
group is likely a consequence of H-bond formation. A recent work by some of us has shown that DIA and
TETRA, while
not forming N–H···O bonds, display weak but
extensive C–H···O interactions between the electron-rich
carbonyl group and the weakly polar C–H bonds of CH₂ groups. While intermolecular N–H···O
bonds are at least partially present also in the amorphous state of
NOR, as testified by its much higher glass transition temperature
(see ),
it is unlikely that the C–H···O interactions
play any role in the amorphous state of the three compounds, as we
argue further in . A straightforward comparison of the hydrogen
bond scheme in the
solid state of the three compounds can be carried out based on the
analysis of the Hirshfeld surface areas (see ). This surface represents a particular way
of partitioning the overall electron density in a molecular crystal
into individual molecular units, which
provides a three-dimensional image of the close contacts in the crystal
by guaranteeing maximum proximity of the corresponding Hirshfeld volumes
of nearest-neighbor molecules. The color code employed by convention
is that a yellow or red color indicates points of short intermolecular
contact, while blue indicates regions of the Hirshfeld surface corresponding
to directions in which the intermolecular distance is comparatively
longer. ,
adapted
from ref , shows
the key intermolecular contacts derived from the Hirshfeld surface
area analysis at room temperature in the crystalline state. It evidences
the relevance of the hydrogen bond scheme for these compounds and,
in particular, that of the O···H for NOR compared to
DIA and TETRA, in agreement with the role of the strong N–H···O
H–bond interaction in NOR. It is interesting to note that there is a correlation
between melting
point, density, and Hirshfeld surface and volume parameters. In particular,
the Hirshfeld molecular volume and surface and the Hirshfeld volume
normalized to molecular weight are the largest for TETRA, which has
the smallest density and the lowest T m of the three derivatives, and they are the smallest for NOR, which
has the largest density and highest T m . This correlation evidences the influence on the melting temperature
of the hydrogen bonds in crystalline NOR. We point out that
the correlation is instead not strictly verified
when considering the glass transition temperature of all derivatives,
as T g,DIA > T g,TETRA . However, as mentioned, the T g of NOR
is significantly higher than that of the other two compounds, which
is indicative of the presence of some H bonding also in the liquid
phase of this compound. Instead of tightly bound stable H-bonded dimers
in the liquid phase, only short-lived H bonds are expected to occur,
and it is likely that a given NOR molecule only takes part, at most,
in one H-bond at a time.

3.3 Broadband Dielectric Spectroscopy Results

In order to see in detail how the small differences in molecular formula among the three studied benzodiazepines, as well as the differing relevance of the hydrogen-bond network, affect the molecular mobility and
conformational dynamics in the amorphous state, we carried out dielectric
spectroscopy experiments on all three compounds in their amorphous
states. shows
the dielectric loss function of the three compounds at a few selected
temperatures, plotted against the frequency of the applied electric
field.

3.3.1 Structural Relaxation

For all three
diazepines, the most intense loss peak is observed at high temperatures
, and corresponds
to the structural relaxation (or α relaxation) of the supercooled
liquid phase. Below the calorimetric glass transition temperature T g (at which τ α = 10² s), the peak frequency of the α relaxation lies outside
the experimental frequency window, and only the tail of the α
peak is observed. When the temperature is increased above T g , the onset of the cooperative relaxation dynamics
of the liquid phase is signaled by the appearance in the experimental
frequency window of the α peak maximum, which then shifts to
higher frequencies as the temperature is further increased. The intensity of the α loss feature of both DIA and NOR is
roughly constant above T g . Instead, recrystallization
upon heating can be clearly discerned in the series of loss spectra
in the case of TETRA. Indeed, at temperatures higher than 335 K the
dielectric intensity of the α peak of TETRA is observed to decrease
further and further as the amorphous fraction in the sample decreases
(the dielectric loss intensity is proportional to the number density
of molecules in the amorphous supercooled liquid state). To analyze the relaxation dynamics of
the cooperative α relaxation
in detail, we fitted all dielectric spectra as the sum of several
Havriliak–Negami components (see ), each corresponding to a distinct relaxation,
in order to extract the temperature-dependent relaxation times (see the section). The fits are shown in along with the experimental
data. We found in particular that the fit with Havriliak–Negami
curves resulted in a Cole–Davidson function for the structural
relaxation. It can be observed in that the α peak of each compound has
exactly
the same shape regardless of temperature: the isothermal spectra at
various temperatures could be superposed onto one another by rescaling
the frequency scale and the signal intensity to those of the loss
maximum. This master-curve scaling was employed in the fitting procedure,
by imposing the same Cole–Davidson (CD) exponent in all high-temperature
spectra of a given compound, as indicated for selected temperatures
in the three panels of . The CD exponent that best described the α peaks was
found to be b = 0.59 ± 0.03 for DIA and TETRA,
and b = 0.50 ± 0.02 for NOR. This result indicates
a slightly greater cooperativity for NOR with respect to DIA and TETRA, possibly related to the presence of intermolecular H-bonds in NOR.
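Before imposing a common shape parameter, the master-curve superposition can be checked numerically; a minimal sketch (NumPy only; names illustrative) that rescales each isothermal spectrum by the position and height of its loss maximum:

```python
import numpy as np

def rescale_to_peak(freq, loss):
    """Normalize one isothermal loss spectrum by the frequency and height
    of its maximum; shape-invariant spectra then collapse onto one curve."""
    i_max = np.argmax(loss)
    return freq / freq[i_max], loss / loss[i_max]

# spectra = {T: (freq, eps2), ...} for several temperatures above Tg.
# If all rescaled curves superpose, a single Cole-Davidson exponent b
# can be imposed on every high-temperature fit, as done here.
```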
shows the α relaxation times of all three studied diazepines versus the
inverse temperature (Arrhenius plot). The α relaxation time
follows the Vogel–Fulcher–Tammann (VFT) temperature dependence typical of cooperative structural relaxations:

$$\tau_\alpha(T) = \tau_0 \exp\left(\frac{D\,T_0}{T - T_0}\right) \tag{3}$$

Here, τ₀ is the characteristic time at infinite temperature, D is the fragility strength coefficient, and T₀ is the Vogel–Fulcher temperature. The so-called “kinetic”
or “dielectric” glass transition temperature T g of the sample is defined as the temperature
at which the relaxation time reaches 100 s, i.e., where log₁₀(τ α /[s]) = 2 (horizontal yellow line in a). The kinetic glass
transition temperatures are 312.6, 309.0, and 347.2 K for DIA, TETRA,
and NOR, respectively. These values are very similar to the ones found in DSC
(see ), as expected. It is interesting
to compare the dependence of the relaxation times
with the inverse temperature rescaled to T g (the so-called Angell plot), as shown in b. The reduced temperature T / T g is a measure of how far above or
deep into the glass state is a sample. Remarkably, we find that the
structural relaxation times of the three pharmaceuticals coincide
in the Angell plot, which means that despite the structural differences
and the almost 40 K of difference in T g (and even more in T m ), the supercooled
liquid of these pharmaceuticals behaves cooperatively in the same
way when the distance from T g is the same.
This result is reflected in the VFT parameters listed in (in particular, in
the similar value of the fragility strength coefficient D ), and it can also be seen in the values of the so-called fragility
index ( m p ) of the amorphous samples, which
is defined as:

$$m_p = \left.\frac{\mathrm{d}\,\log_{10}\tau_\alpha}{\mathrm{d}\left(T_g/T\right)}\right|_{T = T_g} \tag{4}$$

The fragility index is virtually the same, within the error, for
DIA, NOR, and TETRA. The fragility index has often been related to
the capacity of a sample to recrystallize when heated from the amorphous
to the liquid state. This, however, is only an empirical generalization, and the present case confirms that such an empirical rule fails, given
the identical fragility of the three samples and their noticeable
difference in recrystallization behavior. Also, the apparent activation
energy at T g , i.e., the slope of the tangent
to the Arrhenius plot of the structural relaxation at the glass transition,
cannot be taken as a reliable predictor of the tendency toward nucleation:
in fact, this parameter is again virtually identical in the case of
DIA and TETRA (see ), which exhibit instead very distinct nucleation tendency.
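For completeness, the quantities discussed in this subsection follow analytically from the VFT parameters of eq 3; a short sketch (the numerical values in the usage comment are hypothetical, shown only to illustrate the form of the calculation; see the tabulated VFT parameters for the actual ones):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def kinetic_tg(tau0, D, T0, tau_g=100.0):
    """Temperature at which the VFT time of eq 3 reaches tau_g = 100 s."""
    return T0 * (1.0 + D / np.log(tau_g / tau0))

def fragility_index(tau0, D, T0):
    """Steepness index m_p of eq 4, evaluated from the VFT parameters."""
    Tg = kinetic_tg(tau0, D, T0)
    return D * T0 * Tg / (np.log(10.0) * (Tg - T0) ** 2)

def apparent_ea_at_tg(tau0, D, T0):
    """Apparent activation energy at Tg (slope of the Arrhenius plot), kJ/mol."""
    Tg = kinetic_tg(tau0, D, T0)
    return R * D * T0 * Tg ** 2 / (Tg - T0) ** 2 / 1e3

# Hypothetical example: tau0 = 1e-14 s, D = 8, T0 = 260 K gives Tg of ~316 K.
# print(kinetic_tg(1e-14, 8.0, 260.0), fragility_index(1e-14, 8.0, 260.0))
```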
3.3.2 Secondary Relaxations

Besides the α relaxation, three more secondary peaks were observed in the
loss spectra at higher frequency (or lower temperature) than the cooperative
loss , both
in the supercooled liquid and the glass states. One of the secondary
relaxations, which we label as β, can be observed in all three
cases as a high-frequency shoulder to the structural peak. Another
secondary peak (γ) is observed in the glass state of all three
compounds, i.e., at low temperatures. Finally, at the lowest temperatures
studied a third secondary peak (γ′) could be discerned
in DIA and TETRA. In the case of NOR, the loss intensity at frequencies
higher than that of the γ peak was very low, so that it would
appear that the γ′ relaxation was almost absent in this
compound. We have nonetheless performed a fit of this spectral region
for completeness. All secondary relaxations could be fitted with symmetric
Cole–Cole functions (see section). a displays the full Arrhenius relaxation
maps of DIA (half points), NOR (open points), and TETRA (solid points).
As visible in this figure, all secondary relaxations displayed a simply
activated dependence on temperature, described by the Arrhenius law:

$$\tau(T) = \tau_\infty \exp\left(\frac{E_a}{RT}\right) \tag{5}$$

where τ∞ is the characteristic time at very high (infinite) temperature (it plays the same role as τ₀ in the VFT equation, eq 3), E_a is the activation energy, and R is the universal gas constant.
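Extracting E_a from the relaxation map amounts to a linear fit in Arrhenius coordinates; a minimal sketch (NumPy only; names and the numbers in the usage comment are illustrative):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def arrhenius_fit(T, tau):
    """Fit ln(tau) = ln(tau_inf) + Ea/(R*T) (eq 5);
    returns Ea in kJ/mol and tau_inf in s."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.log(np.asarray(tau)), 1)
    return slope * R / 1e3, np.exp(intercept)

# Illustrative relaxation times of a secondary process read off the map:
# Ea, tau_inf = arrhenius_fit([150, 160, 170, 180], [1e-2, 1e-3, 2e-4, 4e-5])
```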
The β relaxation of all three compounds displayed a kink
at T ≈ T g ( b), where its activation
energy E a, β (proportional
to the slope in the Arrhenius or Angell plots) was found to change
discontinuously (it cannot be excluded that above T g , the activation energy of the β process is actually
slightly dependent on T ). This cross-over in the
temperature dependence is typical of the so-called Johari–Goldstein
(JG) secondary relaxation, a local whole-molecule relaxation that
is strongly correlated with the structural one and that is a feature
common to most glass formers. It can be easily seen in a and a that the difference
in glass transition temperature is reflected both in the α and
β relaxations. In fact, at the same given temperature, both
α and β relaxation times are much longer for NOR than
for DIA or TETRA, corresponding to much slower molecular dynamics.
The analysis shown in b provides a means to further verify the JG character of the
β relaxation. In fact, the β relaxations of DIA, NOR,
and TETRA are observed to be virtually superposed in the Angell plot,
where the three compounds all display a kink at T g / T ≈ 1, and the β activation
energy below T g is virtually the same
(within the error) for all three compounds (see ). The fact that the (secondary) β
relaxation time scales with T g (which
as discussed in is actually related to the kinetic arrest of the α
relaxation) is typical of JG relaxations. The study of this type of relaxation is particularly relevant
for
amorphous drugs because several studies have brought forth the idea
that the kinetic stability of a molecular glass is correlated with
the secondary β relaxation. In particular, it has been argued
experimentally that a small-molecule glass is kinetically stable only
below the onset temperature of the JG relaxation, typically a few tens
of degrees below T g . In the case of the diazepines, the relaxation time of the
β JG relaxation reaches the standard value of 100 s between
30 and 40 K below the T g of the compound.
In our experiments, NOR and TETRA displayed a tendency to recrystallize
above T g , while DIA did not. It should
be noted that the onset of the β relaxation is likely a minimal
requirement for recrystallization: in our experiments, supercooled
DIA was not observed to recrystallize during a period of a few days even above the onset of the α relaxation, i.e., above T g . The main theoretical model
concerning the JG relaxation is the
Coupling Model (hereafter, CM). The CM interprets the
JG relaxations as a local, non-cooperative whole-molecule process,
which acts as the “precursor” at shorter times of the
α relaxation. The characteristic CM relaxation
times in the supercooled liquid state are given by the following approximated
equation, which should approximately equal the experimental JG relaxation
times:

$$\tau_{\mathrm{JG}}(T) \approx \tau_0(T) = t_c^{\,n}\,\left[\tau_\alpha(T)\right]^{1-n} \tag{6}$$

Here, t_c is the correlation time (usually of the order of 2 ps) and n, called the coupling parameter, is related to the Havriliak–Negami exponents of the α relaxation by the approximate relation 1 − n = (ab)^{1/1.23}. In the case of the studied diazepines, the Havriliak–Negami function reduces to a Cole–Davidson equation with a single exponent b, which is found to be independent of temperature, so that the coupling parameter is constant and equal to n = 1 − b^{1/1.23}.
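The CM prediction can thus be computed directly from the Cole–Davidson exponent and the measured α time; a sketch (with t_c = 2 ps, as stated above):

```python
def coupling_parameter(b):
    """Coupling parameter for a Cole-Davidson alpha peak: n = 1 - b**(1/1.23)."""
    return 1.0 - b ** (1.0 / 1.23)

def tau_jg_cm(tau_alpha, b, t_c=2e-12):
    """CM precursor time of eq 6: tau_0 = t_c**n * tau_alpha**(1 - n)."""
    n = coupling_parameter(b)
    return t_c ** n * tau_alpha ** (1.0 - n)

# At Tg (tau_alpha = 100 s) with b = 0.59 this gives tau_0 of order 1e-3 s,
# to be compared with the experimental JG time at the same temperature.
```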
Equation 6 then predicts that the β relaxation time is perfectly correlated with the structural relaxation
time and thus scales with T g , as indeed
observed. Despite this, the relaxation times calculated with the CM
theory do not coincide with the experimental JG ones. This might be
due to the fact that the β relaxation is observed only as a
shoulder of the α peak, in which case it has been shown that
the fitting procedure that we employed does not reproduce the precursor
frequency predicted by the CM. It is nevertheless worth pointing out
that the difference at T g between the
theoretical times and the experimental ones can be off by as many
as two orders of magnitude (see b). We finally discuss the fastest secondary
relaxations observed in
our samples. These relaxations must stem from intramolecular degrees
of freedom. In the case of the benzodiazepine ring, the only degree
of freedom corresponds to the chirality inversion between P and M
conformers discussed in the previous section. Apart from this, all
three molecules possess a torsional degree of freedom corresponding
to the single covalent bond linking the fused benzodiazepine ring
with the six-membered carbon ring. There are two more degrees of freedom
in some of the derivatives, namely, the internal rotation of the methyl
group in DIA and TETRA, and a possible conformational interconversion
dynamics of the non-planar cyclohexene ring of TETRA. Neither of these
processes is expected to give rise to a dielectric relaxation feature,
due to the lack of dipole moment of either moiety, so that there are
only two possible candidates for the experimentally observed γ
and γ′ relaxations. As visible in the Angell plot
of b, neither
the γ nor the γ′
relaxation scales with the α relaxation or with the glass transition
temperature, which indicates that they correspond to local relaxation
processes of very low cooperativity. Looking at the relaxation maps
of a, it can
be seen that the three γ relaxations have very similar relaxation
times at a given fixed temperature in all three compounds and also
that the corresponding activation energies E a,γ are close for all studied diazepines. Instead, the α and
β relaxations have very different relaxation times between NOR
on one hand and DIA and TETRA on the other, as stated previously,
and the γ′ relaxation is quite separated in DIA and TETRA.
The similarity of the γ relaxation times and activation energy,
and the fact that this relaxation is unaffected by the distance from
the glass transition temperature suggest that the γ relaxation
is an intramolecular relaxation process common to all three diazepines. As mentioned in , all three studied benzodiazepines exist
in two possible
equivalent conformations of opposite chirality. Both conformers, P
and M, are present in the crystal phase of each compound. In the gas
phase and in solution, benzodiazepines are known to be relatively
flexible and to display inter-conversion dynamics between the two
equivalent conformations, accompanied by a reorientation by 60°
of the CH₂ moiety attached to the carbonyl group, as discussed, e.g., by Mielcarek et al. The conformational dynamics of DIA and NOR was reported in previous
studies for molecules in solution, and it was found that the activation
energy was not significantly dependent on the solvent. The conformational
activation energies were found experimentally to be 74 and 52 kJ/mol
for DIA and NOR, respectively. Because the conformational
transition is accompanied also by a
change in position of the polar carbonyl group and of the nitrogen
atoms and thus of the direction of the
molecular dipole moment, such conformational change should be observable
in dielectric spectroscopy. The fact that the γ relaxation is
observed in all three compounds at very similar relaxation times leads
us to assign this process to the inter-conversion dynamics between
P and M conformations (see inset to ). It can instead be ruled out that the γ′
relaxation can correspond to such dynamics, considering that the DIA
and NOR derivatives, which have identical fused benzodiazepine rings,
have γ′ relaxation times differing by more than two orders
of magnitude. It may seem surprising that the M–P interconversion takes place also in the liquid phase of NOR, given the presence of hydrogen
bonds. It must however be considered that the H-bond network in a
liquid phase is dynamic and in general only involves a fraction of
the molecules at a given time. The dielectric signal of the P–M
interconversion dynamics of NOR, namely, the γ relaxation of
this compound, likely stems from the fraction of molecules that are
not involved in H-bonding at a given time. It is worth pointing out,
in this respect, that the relaxation time and activation energies
are similar but not identical in the three compounds. We also remark
that the experimental values of the corresponding activation energy
in solution are roughly twice those of the γ relaxations reported
in . It should
however be kept in mind that the extent of H bonding will differ depending
on the liquid phase, and, more importantly, our measurements of the
γ dynamics are all in the glass state of the pure compound.
It is well-known that the temperature dependence of the structural
and JG relaxations displays an abrupt change at T g due to the loss of ergodic equilibrium when going from
the supercooled liquid to the glass phase. This is clearly visible
for the case of the β JG relaxation of the studied benzodiazepines
in b, as discussed
earlier. The same effect is expected to be visible for any relaxation
process whose characteristic time is affected by the viscosity, and
it could be that the interconversion rate between P and M conformers
(γ relaxation time) is partially affected by changes of macroscopic
properties of the sample such as its viscosity (although it cannot
depend only on it, as b shows). Dielectric relaxation studies of flexible heterocyclic
molecules are relatively uncommon, and, to the best of our knowledge,
ours is one of the few dielectric spectroscopy studies that have provided
a clear identification of the ring conformational dynamics in polycyclic
molecules. Finally, concerning the γ′ relaxation, both the
range
of temperature in which it is observed and its characteristic relaxation
time are very different between DIA and TETRA, as mentioned, albeit
that its activation energy is of the same order of magnitude in both
compounds. Given that this relaxation is virtually absent in NOR,
it is likely that it is suppressed or at least strongly hindered by
the presence of intermolecular hydrogen bonds. All three studied benzodiazepines
have, as mentioned, a further degree of freedom, corresponding to
the torsional rotation around the covalent bond linking the fused
double ring with the six-membered carbon ring. While the latter has basically no dipole moment, a rotation of the
double ring about this covalent bond could lead to a rigid rotation
of the molecular dipole moment, which would contribute a dielectric
loss signal. Therefore, we tentatively assign the γ′
relaxation to the rigid rotation, likely by a small angle, of the
double ring about its bond with the six-membered carbon ring. Such
rotation might be partially hindered, in the case of NOR, by the presence
of a network of intermolecular hydrogen bonds, which rationalizes
the extremely weak signal of the γ′ relaxation in this
compound. The difference between the γ′ activation energy
and relaxation times of DIA and TETRA might then be attributed to
the different steric hindrance of the two distinct six-membered rings,
namely, a bulkier phenyl ring in the case of DIA and a non-planar
cyclohexene ring in the case of TETRA. This tentative interpretation
is consistent with the much faster γ′ relaxation dynamics
in TETRA.

3.3.3 Crystallization Kinetics

Dielectric
spectroscopy was employed to determine the kinetics of isothermal
recrystallization from the supercooled liquid state of NOR and TETRA
(as mentioned, DIA was not observed to recrystallize in short times).
To this purpose, we acquired series of dielectric spectra at fixed
temperature and analyzed the variation in time of the static dielectric
constant, which is related to the dielectric intensity of the structural
relaxation process. Since NOR has a significantly higher glass transition
temperature than TETRA, at temperatures at which the latter compound
showed recrystallization at detectable rates, NOR is close to being
in the glass state, where the recrystallization onset time is too long and the recrystallization rate too low to allow a dielectric measurement. Therefore, because
such “isothermal comparison” of the recrystallization
process cannot be carried out, we have chosen different temperatures
to study recrystallization at roughly the same reduced temperature T / T g . displays the series of isothermal
permittivity spectra (real and imaginary part) during recrystallization
of TETRA at T = 331 K (corresponding to T / T g,TETRA = 1.07) and of NOR at T = 375 K (corresponding to T / T g,NOR = 1.08). The effect of recrystallization is visible
as a decrease over time of the dielectric intensity of the α
loss feature, or equivalently a decrease of the static permittivity
value ε s , defined as the value of ε′( f ) at the lowest frequency displayed in the figure ( f = 1 Hz for TETRA and f = 2 Hz for NOR,
respectively). The onset time t o of the
recrystallization process was determined as the time at which the
initially constant value of ε s in the supercooled
liquid phase was observed to start decreasing. The evolution of ε s with time elapsed from the start of the recrystallization
is displayed in e. It is clear that the recrystallization of NOR at T / T g,NOR = 1.08 is slower than that of
TETRA at T / T g,TETRA =
1.07, despite the fact that the structural (α) relaxation frequency
and thus the cooperative mobility are, under such conditions, higher
by a factor of four in NOR than in TETRA, as testified by the position
of the loss maxima in panels (c) and (d) of . In order to study the kinetics of recrystallization, we define
, as is customary, a normalized static permittivity value:

$$\varepsilon_N(t) = \frac{\varepsilon_s(\mathrm{SL}) - \varepsilon_s(t)}{\varepsilon_s(\mathrm{SL}) - \varepsilon_s(\mathrm{C})} \tag{7}$$

Here, ε_s(SL) and ε_s(C) are the static permittivities of the supercooled liquid and the crystal phase, as measured before the onset of nucleation of the crystal phase and at the end of the crystal growth, respectively, while ε_s(t) is the static permittivity of the partially recrystallized sample as a function of time. The global kinetics of crystallization can be modeled with the help of the Avrami equation, which is based on the nucleation-and-growth model of the transition from the liquid to the crystal phase. According to this model, the renormalized static permittivity should vary in time as:

$$\varepsilon_N(t) = 1 - \exp\left[-Z\left(t - t_o\right)^{n}\right] \tag{8}$$

Here, n is the Avrami exponent and Z is a constant from which the recrystallization rate in s⁻¹ can be obtained as k = Z^{1/n}. According to eq 8, the quantity ln(−ln(1 − ε_N)) should be linearly proportional to the logarithm of the time elapsed since the onset of recrystallization, t − t_o. This is indeed observed in the Avrami plot displayed in f. The values of the obtained fit parameters are n = 1.01 ± 0.05, k = (7 ± 3)·10⁻⁵ s⁻¹ for TETRA and n = 1.1 ± 0.1, k = (4 ± 2)·10⁻⁵ s⁻¹ for NOR.
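The Avrami analysis of eqs 7 and 8 amounts to a linear fit in the coordinates ln(−ln(1 − ε_N)) versus ln(t − t_o); a minimal sketch (NumPy only; names illustrative):

```python
import numpy as np

def eps_normalized(eps_t, eps_sl, eps_c):
    """Normalized static permittivity of eq 7."""
    return (eps_sl - eps_t) / (eps_sl - eps_c)

def avrami_fit(t, eps_t, eps_sl, eps_c, t_onset):
    """Linearized fit of eq 8, ln(-ln(1 - eps_N)) = ln(Z) + n*ln(t - t_o);
    returns the Avrami exponent n and the rate k = Z**(1/n) in 1/s."""
    t = np.asarray(t, dtype=float)
    eps_n = eps_normalized(np.asarray(eps_t, dtype=float), eps_sl, eps_c)
    mask = (t > t_onset) & (eps_n > 0.0) & (eps_n < 1.0)
    x = np.log(t[mask] - t_onset)
    y = np.log(-np.log(1.0 - eps_n[mask]))
    n, ln_z = np.polyfit(x, y, 1)
    return n, np.exp(ln_z / n)

# eps_t: static permittivity (epsilon' at the lowest displayed frequency) read
# from each isothermal spectrum; eps_sl and eps_c are its plateau values before
# the onset and after the completion of recrystallization, respectively.
```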
The fact that the value of the Avrami exponent is close to unity for both derivatives
indicates a strongly anisotropic (one-dimensional) growth of the crystal
phase after a sporadic nucleation event. A value of n = 1 also allows
direct estimation of the crystal growth rate, that is, separation
of the nucleation and crystal growth phases of the recrystallization. The vertical separation in f, which, assuming an identical value of n, can be related to the difference in recrystallization rate k between the two samples (see the discussion of ref ), confirms the slower crystal growth kinetics directly visible in e, and is consistent with the experimental
ranges of values of the recrystallization rate k of
TETRA and NOR under these conditions. We also studied the recrystallization
of NOR at T = 368 K ( T/T g = 1.06). The latter temperature
was chosen so that the structural relaxation frequency was the same
for both compounds (a condition usually referred to as “isochronal
condition” in the scientific literature). Because the two compounds
have similar fragility indexes, this condition is very similar to
that of same reduced temperature, T / T g . The crystal growth rate of NOR was so slow under these
conditions (at a temperature only 5 K below the crystallization temperature
of ) that we could not follow the process to completion during three full days of continuous measurements.
The crystallization (growth) rate k for NOR at 368
K (k = (7 ± 3)·10⁻⁶ s⁻¹) was one order of magnitude smaller than that for
TETRA at 331 K, and our experiments show that the (homogeneous) nucleation
time is very different in DIA with respect to its derivatives.
and relaxation times of DIA and TETRA might then be attributed to
the different steric hindrance of the two distinct six-member rings,
namely, a bulkier phenyl ring in the case of DIA and a non-planar
cyclohexene ring in the case of TETRA. This tentative interpretation
is consistent with the much faster γ′ relaxation dynamics
in TETRA. 3.3.3 Crystallization Kinetics Dielectric
spectroscopy was employed to determine the kinetics of isothermal
recrystallization from the supercooled liquid state of NOR and TETRA
(as mentioned, DIA was not observed to recrystallize in short times).
To this purpose, we acquired series of dielectric spectra at fixed
temperature and analyzed the variation in time of the static dielectric
constant, which is related to the dielectric intensity of the structural
relaxation process. Since NOR has significantly higher glass transition
temperature than TETRA, at temperatures at which the latter compound
showed recrystallization at detectable rates, NOR is close to being
in the glass state, where the recrystallization onset time and recrystallization
rates are too long to allow a dielectric measurement. Therefore, because
such “isothermal comparison” of the recrystallization
process cannot be carried out, we have chosen different temperatures
to study recrystallization at roughly the same reduced temperature T / T g . displays the series of isothermal
permittivity spectra (real and imaginary part) during recrystallization
of TETRA at T = 331 K (corresponding to T / T g,TETRA = 1.07) and of NOR at T = 375 K (corresponding to T / T g,NOR = 1.08). The effect of recrystallization is visible
as a decrease over time of the dielectric intensity of the α
loss feature, or equivalently a decrease of the static permittivity
value ε s , defined as the value of ε′( f ) at the lowest frequency displayed in the figure ( f = 1 Hz for TETRA and f = 2 Hz for NOR,
respectively). The onset time t o of the
recrystallization process was determined as the time at which the
initially constant value of ε s in the supercooled
liquid phase was observed to start decreasing. The evolution of ε s with time elapsed from the start of the recrystallization
is displayed in e. It is clear that the recrystallization of NOR at T / T g,NOR = 1.08 is slower than that of
TETRA at T / T g,TETRA =
1.07, despite the fact that the structural (α) relaxation frequency
and thus the cooperative mobility are, under such conditions, higher
by a factor of four in NOR than in TETRA, as testified by the position
of the loss maxima in panels (c) and (d) of . In order to study the kinetics of recrystallization, we define
as customary , a normalized static permittivity
value as: 7 Here, ε s (SL)
and ε s (C) are the
static permittivity of the supercooled liquid and the crystal phase,
as measured before the onset of nucleation of the crystal phase and
at the end of the crystal growth, respectively, while ε s ( t ) is the static permittivity of the partially
recrystallized sample as function of time. The global kinetics of
crystallization can be modeled with the help of the Avrami equation, , which is based on the nucleation-and-growth model of the transition
from the liquid to the crystal phase. According to this model, the
renormalized static permittivity should vary in time as: , 8 Here, n is
the Avrami exponent and Z is a constant from which
the recrystallization rate in s –1 can be obtained , as k = Z 1/ n . According to , the quantity ln(−ln(1 – ε n )) should
be linearly proportional to the logarithm of the time elapsed since
the onset of recrystallization, t – t o . This is indeed observed in the Avrami plot
displayed in f. The values of the obtained fit parameters are n = 1.01 ± 0.05, k = (7 ± 3)·10 –5 s –1 for TETRA and n = 1.1 ± 0.1, k = (4 ± 2)·10 –5 s –1 for NOR. The fact that the
value of the Avrami exponent is close to unity for both derivatives
indicates a strongly anisotropic (one-dimensional) growth of the crystal
phase after a sporadic nucleation event. , , A value of n = 1 also allows
direct estimation of the crystal growth rate, that is, separation
of the nucleation and crystal growth phases of the recrystallization. The vertical separation in f, in which assuming an identical
value of n can be related to the difference in recrystallization
rate k between the two samples (see the discussion
of of ref , confirms the slower crystal
growth kinetics directly visible in e, and is consistent with the experimental
ranges of values of the recrystallization rate k of
TETRA and NOR under these conditions. We also studied the recrystallization
of NOR at T = 368 K ( T/T g = 1.06). The latter temperature
was chosen so that the structural relaxation frequency was the same
for both compounds (a condition usually referred to as “isochronal
condition” in the scientific literature). Because the two compounds
have similar fragility indexes, this condition is very similar to
that of same reduced temperature, T / T g . The crystal growth rate of NOR was so slow under these
conditions (at a temperature only 5 K below the crystallization temperature
of ) that we
could not complete it during three full days of continuous measurements.
The crystallization (growth) rate k for NOR at 368
K ( k = (7 ± 3)·10 –6 s –1 ) was one order of magnitude smaller than that for
TETRA at 331 K, and our experiments show that the (homogeneous) nucleation
time is very different in DIA with respect to its derivatives.
Structural Relaxation For all three
diazepines, the most intense loss peak is observed at high temperatures
, and corresponds
to the structural relaxation (or α relaxation) of the supercooled
liquid phase. Below the calorimetric glass transition temperature T g (at which τ α = 10² s), the peak frequency of the α relaxation lies outside
the experimental frequency window, and only the tail of the α
peak is observed. When the temperature is increased above T g , the onset of the cooperative relaxation dynamics
of the liquid phase is signaled by the appearance in the experimental
frequency window of the α peak maximum, which then shifts to
higher frequencies as the temperature is further increased. The intensity of the α loss feature of both DIA and NOR is
roughly constant above T g . Instead, recrystallization
upon heating can be clearly discerned in the series of loss spectra
in the case of TETRA. Indeed, at temperatures higher than 335 K the
dielectric intensity of the α peak of TETRA is observed to decrease
further and further as the amorphous fraction in the sample decreases
(the dielectric loss intensity is proportional to the number density
of molecules in the amorphous supercooled liquid state ). To analyze the relaxation dynamics of
the cooperative α relaxation
in detail, we fitted all dielectric spectra as the sum of several
Havriliak–Negami components (see ), each corresponding to a distinct relaxation,
in order to extract the temperature-dependent relaxation times ( , see the section). The fits are shown in along with experimental
data. We found in particular that the fit with Havriliak–Negami
curves resulted in a Cole–Davidson function for the structural
relaxation. It can be observed in that the α peak of each compound has
exactly
the same shape regardless of temperature: the isothermal spectra at
various temperatures could be superposed onto one another by rescaling
the frequency scale and the signal intensity to those of the loss
maximum. This master-curve scaling was employed in the fitting procedure,
by imposing the same Cole–Davidson (CD) exponent in all high-temperature
spectra of a given compound, as indicated for selected temperatures
in the three panels of . The CD exponent that best described the α peaks was
found to be b = 0.59 ± 0.03 for DIA and TETRA,
and b = 0.50 ± 0.02 for NOR. This result indicates
a slightly greater cooperativity for NOR with respect to DIA and TETRA, , possibly related to the presence of intermolecular H-bonds in NOR. shows the
α relaxation times of all three studied diazepines versus the
inverse temperature (Arrhenius plot). The α relaxation time
follows the Vogel–Fulcher–Tammann (VFT) temperature dependence typical of cooperative structural relaxations:

$$\tau_\alpha(T) = \tau_0 \exp\left(\frac{D\,T_0}{T - T_0}\right) \qquad (3)$$

Here, τ 0 is
the characteristic time at infinite temperature, D is the fragility strength coefficient, and T 0 is the Vogel–Fulcher temperature. The so-called “kinetic”
or “dielectric” glass transition temperature T g of the sample is defined as the temperature
at which the relaxation time reaches 100 s, i.e., where log₁₀(τ α /[s]) = 2 (horizontal yellow line in a). The kinetic glass
transition temperatures are 312.6, 309.0, and 347.2 K for DIA, TETRA,
and NOR, respectively . These values are very similar to the ones found in DSC
(see ), as expected. It is interesting
to compare the dependence of the relaxation times
with the inverse temperature rescaled to T g (the so-called Angell plot), as shown in b. The reduced temperature T / T g is a measure of how far above or
deep into the glass state is a sample. Remarkably, we find that the
structural relaxation times of the three pharmaceuticals coincide
in the Angell plot, which means that despite the structural differences
and the almost 40 K of difference in T g (and even more in T m ), the supercooled
liquid of these pharmaceuticals behaves cooperatively in the same
way when the distance from T g is the same.
This result is reflected in the VFT parameters listed in (in particular, in
the similar value of the fragility strength coefficient D ), and it can also be seen in the values of the so-called fragility
index ( m p ) of the amorphous samples, which is defined as:

$$m_p = \left.\frac{\mathrm{d}\,\log_{10}\tau_\alpha}{\mathrm{d}\left(T_g/T\right)}\right|_{T = T_g} \qquad (4)$$

The fragility index is virtually the same, within the error, for
DIA, NOR, and TETRA. The fragility index has often been related to
the capacity of a sample to recrystallize when heated from the amorphous
to the liquid state. This, however, is only an empirical generalization,
and the present case confirms that such empirical rule fails, given
the identical fragility of the three samples and their noticeable
difference in recrystallization behavior. Also, the apparent activation
energy at T g , i.e., the slope of the tangent
to the Arrhenius plot of the structural relaxation at the glass transition,
cannot be taken as a reliable predictor of the tendency toward nucleation:
in fact, this parameter is again virtually identical in the case of
DIA and TETRA (see ), which exhibit instead very distinct nucleation tendency.
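To make the analysis of eqs (3) and (4) concrete, the following is a minimal sketch of how the kinetic T g and the fragility index can be extracted from relaxation-time data; the data points and starting values are illustrative placeholders, not the fitted parameters of this work:

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def log10_tau_vft(T, log10_tau0, D, T0):
    """VFT law of eq (3), expressed for log10(tau_alpha / s)."""
    return log10_tau0 + (D * T0 / (T - T0)) / np.log(10)

# Illustrative placeholder data (T in K, log10 of tau_alpha in s);
# replace with measured relaxation times.
T_data = np.array([315.0, 320.0, 330.0, 350.0, 380.0, 420.0])
logtau_data = np.array([1.80, 1.32, 0.43, -1.06, -2.80, -4.50])

popt, _ = curve_fit(log10_tau_vft, T_data, logtau_data, p0=(-14.0, 35.0, 160.0))
log10_tau0, D, T0 = popt

# Kinetic Tg: the temperature at which log10(tau_alpha / s) = 2 (tau = 100 s).
Tg = brentq(lambda T: log10_tau_vft(T, *popt) - 2.0, T0 + 1.0, 500.0)

# Fragility index of eq (4): slope of log10(tau_alpha) versus Tg/T at T = Tg;
# for the VFT law this derivative has the closed form below.
m_p = (D * T0 / np.log(10)) * Tg / (Tg - T0) ** 2

print(f"Tg = {Tg:.1f} K, m_p = {m_p:.1f}")
```

The same fitted parameters thus yield both the kinetic T g (by solving the VFT law for τ α = 100 s) and the fragility index, so the two quantities reported in the text follow from a single parametrization of the data.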
Secondary Relaxations Besides the
α relaxation, three more secondary peaks were observed in the
loss spectra at higher frequency (or lower temperature) than the cooperative
loss , both
in the supercooled liquid and the glass states. One of the secondary
relaxations, which we label as β, can be observed in all three
cases as a high-frequency shoulder to the structural peak. Another
secondary peak (γ) is observed in the glass state of all three
compounds, i.e., at low temperatures. Finally, at the lowest temperatures
studied, a third secondary peak (γ′) could be discerned
in DIA and TETRA. In the case of NOR, the loss intensity at frequencies
higher than that of the γ peak was very low, so that it would
appear that the γ′ relaxation was almost absent in this
compound. We have nonetheless performed a fit of this spectral region
for completeness. All secondary relaxations could be fitted with symmetric
Cole–Cole functions (see section). a displays the full Arrhenius relaxation
maps of DIA (half points), NOR (open points), and TETRA (solid points).
As visible in this figure, all secondary relaxations displayed a simply
activated dependence on temperature, described by the Arrhenius law:

$$\tau(T) = \tau_\infty \exp\left(\frac{E_a}{R\,T}\right) \qquad (5)$$

where τ ∞ is the characteristic
time at very high (infinite) temperature (it
plays the same role as τ 0 in the VFT law), E a is the activation energy, and R is the universal
gas constant. The β relaxation of all
three compounds displayed a kink
at T ≈ T g ( b), where its activation
energy E a, β (proportional
to the slope in the Arrhenius or Angell plots) was found to change
discontinuously (it cannot be excluded that above T g , the activation energy of the β process is actually
slightly dependent on T ). This cross-over in the
temperature dependence is typical of the so-called Johari–Goldstein
(JG) secondary relaxation, a local whole-molecule relaxation that
is strongly correlated with the structural one and that is a feature
common to most glass formers. It can be easily seen in a and a that the difference
in glass transition temperature is reflected both in the α and
β relaxations. In fact, at the same given temperature, both
α and β relaxation times are much longer for NOR than
for DIA or TETRA, corresponding to much slower molecular dynamics.
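Since eq (5) implies that ln τ is linear in 1/T, the activation energy of a secondary process follows from a simple linear regression; a minimal sketch with placeholder values, not the measured ones:

```python
import numpy as np

R = 8.314  # J/(mol K), universal gas constant

# Placeholder (T in K, tau in s) pairs for a secondary relaxation in the glass.
T = np.array([150.0, 160.0, 170.0, 180.0, 190.0])
tau = np.array([1e-1, 1e-2, 1.5e-3, 3e-4, 7e-5])

# Arrhenius law, eq (5): ln tau = ln tau_inf + E_a/(R T)  ->  linear in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)
E_a = slope * R            # activation energy in J/mol
tau_inf = np.exp(intercept)
print(f"E_a = {E_a/1000:.0f} kJ/mol, tau_inf = {tau_inf:.2e} s")
```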
The analysis shown in b provides a means to further verify the JG character of the
β relaxation. In fact, the β relaxations of DIA, NOR,
and TETRA are observed to be virtually superposed in the Angell plot,
where the three compounds all display a kink at T g / T ≈ 1, and the β activation
energy below T g is virtually the same
(within the error) for all three compounds (see ). The fact that the (secondary) β
relaxation time scales with T g (which
as discussed in is actually related to the kinetic arrest of the α
relaxation) is typical of JG relaxations. The study of this type of relaxation is particularly relevant
for
amorphous drugs because several studies have brought forth the idea
that the kinetic stability of a molecular glass is correlated with
the secondary β relaxation. In particular, it has been argued
experimentally that a small-molecule glass is kinetically stable only
below the onset temperature of the JG relaxation, typically a few tens
of degrees below T g . In the case of the diazepines, the relaxation time of the
β JG relaxation reaches the standard value of 100 s between
30 and 40 K below the T g of the compound.
In our experiments, NOR and TETRA displayed a tendency to recrystallize
above T g , while DIA did not. It should
be noted that the onset of the β relaxation is likely a minimal
requirement for recrystallization: in our experiments, supercooled
DIA was not observed to recrystallize during a period of a few days even above the onset of the α relaxation, i.e., above T g . The main theoretical model
concerning the JG relaxation is the
Coupling Model (hereafter, CM). , The CM interprets the
JG relaxations as a local, non-cooperative whole-molecule process,
which acts as the “precursor” at shorter times of the
α relaxation. , The characteristic CM relaxation
times in the supercooled liquid state are given by the following approximated
equation, which should approximately equal the experimental JG relaxation times:

$$\tau_0(T) = t_c^{\,n}\,\left[\tau_\alpha(T)\right]^{1-n} \qquad (6)$$

Here, t c is the correlation time (usually
of the order of 2 ps) and n , called the coupling
parameter, is related to the Havriliak–Negami exponents of
the α relaxation by the approximate relation $1 - n = (ab)^{1/1.23}$. In the case of the studied diazepines, the Havriliak–Negami
function reduces to a Cole–Davidson equation with a single
exponent b , which is found to be independent of temperature, so that the coupling parameter is constant and equal to $n = 1 - b^{1/1.23}$. Equation (6) then predicts that the β
relaxation time is perfectly correlated with the structural relaxation
time and thus scales with T g , as indeed
observed. Despite this, the relaxation times calculated with the CM
theory do not coincide with the experimental JG ones. This might be
due to the fact that the β relaxation is observed only as a
shoulder of the α peak, in which case it has been shown that
the fitting procedure that we employed does not reproduce the precursor
frequency predicted by the CM. It is nevertheless worth pointing out
that the difference at T g between the theoretical times and the experimental ones can be off by as much as two orders of magnitude (see b).
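The CM estimate of eq (6) is straightforward to evaluate; a minimal sketch, assuming t c = 2 ps as stated above and using the Cole–Davidson exponents quoted earlier (the value τ α = 100 s corresponds to the kinetic T g definition):

```python
import numpy as np

t_c = 2e-12  # correlation time, of the order of 2 ps

def tau_jg_cm(tau_alpha, b):
    """CM precursor time, eq (6): tau_0 = t_c**n * tau_alpha**(1 - n),
    with coupling parameter n = 1 - b**(1/1.23) for a Cole-Davidson peak."""
    n = 1.0 - b ** (1.0 / 1.23)
    return t_c ** n * tau_alpha ** (1.0 - n)

# At Tg, tau_alpha = 100 s by definition; b values from the fits reported above.
for name, b in [("DIA/TETRA", 0.59), ("NOR", 0.50)]:
    n = 1.0 - b ** (1.0 / 1.23)
    print(name, f"n = {n:.2f}", f"tau_JG(Tg) ~ {tau_jg_cm(100.0, b):.1e} s")
```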
We finally discuss the fastest secondary relaxations observed in
our samples. These relaxations must stem from intramolecular degrees
of freedom. In the case of the benzodiazepine ring, the only degree
of freedom corresponds to the chirality inversion between P and M
conformers discussed in the previous section. Apart from this, all
three molecules possess a torsional degree of freedom corresponding
to the single covalent bond linking the fused benzodiazepine ring
with the six-membered carbon ring. There are two more degrees of freedom
in some of the derivatives, namely, the internal rotation of the methyl
group in DIA and TETRA, and a possible conformational interconversion
dynamics of the non-planar cyclohexene ring of TETRA. Neither of these
processes is expected to give rise to a dielectric relaxation feature,
due to the lack of dipole moment of either moiety, so that there are
only two possible candidates for the experimentally observed γ
and γ′ relaxations. As visible in the Angell plot
of b, neither
the γ nor the γ′
relaxation scales with the α relaxation or with the glass transition
temperature, which indicates that they correspond to local relaxation
processes of very low cooperativity. Looking at the relaxation maps
of a, it can
be seen that the three γ relaxations have very similar relaxation
times at a given fixed temperature in all three compounds and also
that the corresponding activation energies E a,γ are close for all studied diazepines . Instead, the α and
β relaxations have very different relaxation times between NOR
on one hand and DIA and TETRA on the other, as stated previously,
and the γ′ relaxation is quite separated in DIA and TETRA.
The similarity of the γ relaxation times and activation energy,
and the fact that this relaxation is unaffected by the distance from
the glass transition temperature suggest that the γ relaxation
is an intramolecular relaxation process common to all three diazepines. As mentioned in , all three studied benzodiazepines exist
in two possible
equivalent conformations of opposite chirality. Both conformers, P
and M, are present in the crystal phase of each compound. In the gas
phase and in solution, benzodiazepines are known to be relatively
flexible and to display inter-conversion dynamics between the two
equivalent conformations, accompanied by a reorientation by 60°
of the CH 2 moiety attached to the carbonyl group, as discussed,
e.g., by Mielcarek et al . The conformational dynamics of DIA and NOR was reported in previous
studies for molecules in solution, and it was found that the activation
energy was not significantly dependent on the solvent. The conformational
activation energies were found experimentally to be 74 and 52 kJ/mol
for DIA and NOR, respectively. , Because the conformational
transition is accompanied also by a
change in position of the polar carbonyl group and of the nitrogen
atoms and thus of the direction of the
molecular dipole moment, such conformational change should be observable
in dielectric spectroscopy. The fact that the γ relaxation is
observed in all three compounds at very similar relaxation times leads
us to assign this process to the inter-conversion dynamics between
P and M conformations (see inset to ). It can instead be ruled out that the γ′
relaxation can correspond to such dynamics, considering that the DIA
and NOR derivatives, which have identical fused benzodiazepine rings,
have γ′ relaxation times differing by more than two orders
of magnitude. It may seem surprising that the M–P interconversion takes place also in the liquid phase of NOR, given the presence of hydrogen bonds. It must however be considered that the H-bond network in a
liquid phase is dynamic and in general only involves a fraction of
the molecules at a given time. The dielectric signal of the P–M
interconversion dynamics of NOR, namely, the γ relaxation of
this compound, likely stems from the fraction of molecules that are
not involved in H-bonding at a given time. It is worth pointing out,
in this respect, that the relaxation time and activation energies
are similar but not identical in the three compounds. We also remark
that the experimental values of the corresponding activation energy
in solution are roughly twice those of the γ relaxations reported
in . It should
however be kept in mind that the extent of H bonding will differ depending
on the liquid phase, and, more importantly, our measurements of the
γ dynamics are all in the glass state of the pure compound.
It is well-known that the temperature dependence of the structural
and JG relaxations displays an abrupt change at T g due to the loss of ergodic equilibrium when going from
the supercooled liquid to the glass phase. This is clearly visible
for the case of the β JG relaxation of benzodiazepine
in b, as discussed
earlier. The same effect is expected to be visible for any relaxation
process whose characteristic time is affected by the viscosity, and
it could be that the interconversion rate between P and M conformers
(γ relaxation time) is partially affected by changes of macroscopic
properties of the sample such as its viscosity (although it cannot
depend only on it, as b shows). Dielectric relaxation studies of flexible heterocyclic
molecules are relatively uncommon, and, to the best of our knowledge,
ours is one of the few dielectric spectroscopy studies that have provided
a clear identification of the ring conformational dynamics in polycyclic
molecules. , , Finally, concerning the γ′ relaxation, both the
range
of temperature in which it is observed and its characteristic relaxation
time are very different between DIA and TETRA, as mentioned, albeit
that its activation energy is of the same order of magnitude in both
compounds. Given that this relaxation is virtually absent in NOR,
it is likely that it is suppressed or at least strongly hindered by
the presence of intermolecular hydrogen bonds. All three studied benzodiazepines
have, as mentioned, a further degree of freedom, corresponding to
the torsional rotation around the covalent bond linking the fused
double ring with the six-membered carbon ring. , While the latter has basically no dipole moment, a rotation of the
double ring about this covalent bond could lead to a rigid rotation
of the molecular dipole moment, which would contribute a dielectric
loss signal. Therefore, we tentatively assign the γ′
relaxation to the rigid rotation, likely by a small angle, of the
double ring about its bond with the six-membered carbon ring. Such
rotation might be partially hindered, in the case of NOR, by the presence
of a network of intermolecular hydrogen bonds, which rationalizes
the extremely weak signal of the γ′ relaxation in this
compound. The difference between the γ′ activation energy
and relaxation times of DIA and TETRA might then be attributed to
the different steric hindrance of the two distinct six-member rings,
namely, a bulkier phenyl ring in the case of DIA and a non-planar
cyclohexene ring in the case of TETRA. This tentative interpretation
is consistent with the much faster γ′ relaxation dynamics
in TETRA.
Crystallization Kinetics Dielectric
spectroscopy was employed to determine the kinetics of isothermal
recrystallization from the supercooled liquid state of NOR and TETRA
(as mentioned, DIA was not observed to recrystallize in short times).
To this purpose, we acquired series of dielectric spectra at fixed
temperature and analyzed the variation in time of the static dielectric
constant, which is related to the dielectric intensity of the structural
relaxation process. Since NOR has a significantly higher glass transition
temperature than TETRA, at temperatures at which the latter compound
showed recrystallization at detectable rates, NOR is close to being
in the glass state, where the recrystallization onset time and recrystallization
rates are too long to allow a dielectric measurement. Therefore, because
such “isothermal comparison” of the recrystallization
process cannot be carried out, we have chosen different temperatures
to study recrystallization at roughly the same reduced temperature T / T g . displays the series of isothermal
permittivity spectra (real and imaginary part) during recrystallization
of TETRA at T = 331 K (corresponding to T / T g,TETRA = 1.07) and of NOR at T = 375 K (corresponding to T / T g,NOR = 1.08). The effect of recrystallization is visible
as a decrease over time of the dielectric intensity of the α
loss feature, or equivalently a decrease of the static permittivity
value ε s , defined as the value of ε′( f ) at the lowest frequency displayed in the figure ( f = 1 Hz for TETRA and f = 2 Hz for NOR,
respectively). The onset time t o of the
recrystallization process was determined as the time at which the
initially constant value of ε s in the supercooled
liquid phase was observed to start decreasing. The evolution of ε s with time elapsed from the start of the recrystallization
is displayed in e. It is clear that the recrystallization of NOR at T / T g,NOR = 1.08 is slower than that of
TETRA at T / T g,TETRA =
1.07, despite the fact that the structural (α) relaxation frequency
and thus the cooperative mobility are, under such conditions, higher
by a factor of four in NOR than in TETRA, as testified by the position
of the loss maxima in panels (c) and (d) of . In order to study the kinetics of recrystallization, we define
as customary , a normalized static permittivity value as:

$$\varepsilon_n(t) = \frac{\varepsilon_s(\mathrm{SL}) - \varepsilon_s(t)}{\varepsilon_s(\mathrm{SL}) - \varepsilon_s(\mathrm{C})} \qquad (7)$$

Here, ε s (SL)
and ε s (C) are the
static permittivity of the supercooled liquid and the crystal phase,
as measured before the onset of nucleation of the crystal phase and
at the end of the crystal growth, respectively, while ε s ( t ) is the static permittivity of the partially
recrystallized sample as function of time. The global kinetics of
crystallization can be modeled with the help of the Avrami equation, , which is based on the nucleation-and-growth model of the transition
from the liquid to the crystal phase. According to this model, the
renormalized static permittivity should vary in time as:

$$\varepsilon_n(t) = 1 - \exp\left[-Z\,(t - t_o)^{n}\right] \qquad (8)$$

Here, n is the Avrami exponent and Z is a constant from which
the recrystallization rate in s⁻¹ can be obtained , as $k = Z^{1/n}$. According to , the quantity $\ln(-\ln(1 - \varepsilon_n))$ should
be linearly proportional to the logarithm of the time elapsed since
the onset of recrystallization, t – t o . This is indeed observed in the Avrami plot
displayed in f. The values of the obtained fit parameters are n = 1.01 ± 0.05, k = (7 ± 3)·10⁻⁵ s⁻¹ for TETRA and n = 1.1 ± 0.1, k = (4 ± 2)·10⁻⁵ s⁻¹ for NOR. The fact that the
value of the Avrami exponent is close to unity for both derivatives
indicates a strongly anisotropic (one-dimensional) growth of the crystal
phase after a sporadic nucleation event. , , A value of n = 1 also allows
direct estimation of the crystal growth rate, that is, separation
of the nucleation and crystal growth phases of the recrystallization. The vertical separation in f, which under the assumption of an identical value of n can be related to the difference in recrystallization rate k between the two samples (see the discussion of ref ), confirms the slower crystal
growth kinetics directly visible in e, and is consistent with the experimental
ranges of values of the recrystallization rate k of
TETRA and NOR under these conditions. We also studied the recrystallization
of NOR at T = 368 K ( T/T g = 1.06). The latter temperature
was chosen so that the structural relaxation frequency was the same
for both compounds (a condition usually referred to as “isochronal
condition” in the scientific literature). Because the two compounds
have similar fragility indexes, this condition is very similar to
that of same reduced temperature, T / T g . The crystal growth rate of NOR was so slow under these
conditions (at a temperature only 5 K below the crystallization temperature
of ) that we could not follow it to completion during three full days of continuous measurements.
The crystallization (growth) rate k for NOR at 368
K ( k = (7 ± 3)·10⁻⁶ s⁻¹ ) was one order of magnitude smaller than that for
TETRA at 331 K, and our experiments show that the (homogeneous) nucleation
time is very different in DIA with respect to its derivatives.
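The Avrami analysis of eqs (7) and (8) can be summarized in a few lines of code; the following is a minimal sketch with synthetic inputs (the plateau values and the time series are placeholders, not measured data):

```python
import numpy as np

# Placeholder time series (s) of the static permittivity after the onset t_o.
t_o = 0.0
t = np.array([2e3, 5e3, 1e4, 2e4, 4e4, 8e4])
eps_s = np.array([10.9, 10.7, 10.4, 9.8, 8.9, 7.6])
eps_SL, eps_C = 11.0, 7.0   # supercooled-liquid and crystal plateaus

# Normalized permittivity, eq (7): runs from 0 (liquid) to 1 (crystal).
eps_n = (eps_SL - eps_s) / (eps_SL - eps_C)

# Avrami plot, eq (8): ln(-ln(1 - eps_n)) = n*ln(t - t_o) + n*ln(k),
# since Z = k**n; a straight line whose slope is the Avrami exponent n.
y = np.log(-np.log(1.0 - eps_n))
x = np.log(t - t_o)
n, c = np.polyfit(x, y, 1)
k = np.exp(c / n)           # growth rate in 1/s, from Z = k**n
print(f"Avrami exponent n = {n:.2f}, rate k = {k:.1e} s^-1")
```

The linearity of y against x in this plot is what justifies the Avrami description, and the fitted slope close to unity corresponds to the strongly anisotropic growth discussed above.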
Discussion These results on three very similar
molecules have important implications.
Several recent studies on different glass former compounds have reported
that the crystallization time (or equivalently the inverse crystallization
rate) and the structural relaxation time are correlated with one another. , , These studies have shown that
there is a power-law correlation between the recrystallization time
and τ α . Our study of very similar molecular
derivatives shows, in a very direct way, that there cannot be a general
quantitative relation between the absolute numerical values of these two quantities in different samples. This is not surprising
in view of the fact that different compounds have, in general, different
power law exponents; , our study further shows that
even related molecular derivatives have different correlation laws.
Hence, the correlation between τ α and the crystallization
growth rate is not only limited to a temperature interval, as implied
by the standard model of crystallization by nucleation and growth
and as shown experimentally in a recent study of ours, but it also cannot be used as an a priori predictor of crystallization tendency or rate. Indeed, our study
confirms that supercooled liquids of very similar glass-former molecules
have, at the same value of τ α , not only very
different nucleation times but also quite distinct crystal growth
rates, depending, in the present case, on the extent of hydrogen bonding.
These results are in agreement with the standard model of crystallization
by nucleation and growth: in fact, the nucleation step is mainly determined by the difference in bulk free energy between the liquid and crystalline phases and by their interfacial tension, rather than by the molecular
mobility; and similarly, the growth kinetics of crystalline nuclei
is not uniquely determined by the molecular mobility alone. Our findings
imply that, to further improve our experimental understanding of the
kinetic stability of amorphous pharmaceutics, correlations with other
(possibly macroscopic) quantities, related to the local structure
in the liquid and crystal states, should be investigated, beyond that
with the structural mobility or viscosity. To summarize, we
have studied three diazepine derivatives of very
similar mass and molecular structure (Diazepam, Nordazepam and Tetrazepam),
to determine how the differences in the molecular structure and thus
intermolecular interactions affect the properties of the crystalline
and amorphous states of these pharmaceutical compounds. Nordazepam
is the only compound that displays N–H···O hydrogen
bonds, leading to the formation of H-bonded dimers in the crystalline phase; as a consequence, it exhibits a significantly higher melting point and melting enthalpy than the other two compounds, which display similar melting temperatures and enthalpies. Nordazepam has
the highest density in the crystalline state and the smallest Hirshfeld
surface and volume of the three. The diazepine ring has a non-planar
structure, and all three benzodiazepine crystalline structures consist
of two isoenergetic P and M conformers, which are mirror images of
one another and occur in a 1:1 ratio. The characteristic angles of
these conformations are similar in the three compounds. The
liquid phase of Nordazepam displays a significantly higher glass
transition temperature than the other two compounds, and the dielectric
signature of the structural α relaxation is broader in this
compound than in the other two, indicative of a more cooperative structural
relaxation dynamics. These two experimental observations indicate
at least partial hydrogen bonding also in the liquid phase of Nordazepam.
The presence of different possible molecular conformations, as well
as the torsional degree of freedom between the fused double ring and
the six-membered carbon ring, further enrich the relaxation map in
the amorphous (supercooled liquid and glass) state. All three compounds
display a Johari–Goldstein β relaxation, visible as a
shoulder to the main α loss feature. The relaxation time of
both α and β relaxations scales with the temperature normalized
to the glass transition temperature ( T / T g ). The curvature of the structural relaxation is the
same in all three compounds, leading to a virtually identical kinetic
fragility index ( m p ≈ 32). The three compounds display intramolecular relaxations in the glass
state, one of which is common to all of them, and corresponds to the
P-M inter-conformer conversion dynamics of the diazepine heterocycle.
This relaxation does not scale with the cooperative molecular mobility
(α relaxation time), although comparison with liquid-phase studies
indicates that its activation energy is slightly lower in the glass
state compared to the liquid. A fourth, high-frequency secondary relaxation
is present only in Diazepam and Tetrazepam, likely associated with
the rigid rotation of the fused double ring relative to the apolar
six-membered ring. Its almost complete absence in Nordazepam can be
rationalized by the existence of strong hydrogen bonds between the
double rings of neighboring molecules, which prevents such rotation. While supercooled liquid Tetrazepam and Nordazepam are observed
to recrystallize upon heating, with Avrami exponents close to unity
in both cases, Diazepam does not display any tendency toward recrystallization
at least over short periods of time. The crystallization rates of
Tetrazepam and Nordazepam differ, under isochronal conditions of the
structural α relaxation, by more than a decade. We conclude
that the kinetic stability of amorphous diazepines, and especially
the nucleation tendency, does not display any correlation with the
density, kinetic fragility index, or structural or secondary Johari–Goldstein
relaxation time. Only the crystal growth rate, and not the tendency
toward nucleation, is affected by the presence of a hydrogen-bond
network. Our comparison between very similar molecular derivatives
provides a direct confirmation that the search for microscopic criteria
for the kinetic stability of amorphous pharmaceuticals must include,
besides molecular interactions and relaxation dynamics, other parameters
related to the difference in the (local) structure between the liquid
and crystal phases.
A Comprehensive Analysis of COVID-19 Misinformation, Public Health Impacts, and Communication Strategies: Scoping Review

Background The COVID-19 pandemic, a health crisis of unprecedented scale in the 21st century, was accompanied by an equally significant and dangerous phenomenon—an infodemic . The World Health Organization defines an infodemic as the rapid spread and overabundance of information—both accurate and false—that occurs during an epidemic . A tidal wave of misinformation, disinformation, and rumors characterized the infodemic during the COVID-19 pandemic. This led to widespread confusion, mistrust in health authorities, noncompliance with health guidelines, and even risky health behaviors . Moreover, the role of political leaders in shaping the narrative around COVID-19 policies significantly influenced these dynamics. In countries such as the United States, Brazil, and Turkey, the intersection of political ideology and crisis management led to increased societal polarization. Leaders in these nations used communication strategies ranging from denying the severity of the pandemic to promoting unproven treatments . This complex interplay between leadership communication and public response underscores the critical need for science-based policy communication and the responsible use of social media platforms to combat misinformation and foster societal unity in the face of a global health crisis. Furthermore, the emergence of the COVID-19 infodemic highlighted the crucial role of social media literacy in combating misinformation. Educating the public on discerning credible information on the web has emerged as a pivotal strategy for mitigating the spread of misinformation and its consequences .

Misinformation during public health crises has been a recurring problem. Historical examples from the Ebola outbreak, such as rumors that the virus was a government creation or that certain local practices could cure the disease, highlight how misinformation can hinder public health responses . False beliefs, such as that drinking salt water would cure Ebola or that the disease was spread through the air, led to a mistrust of health workers and avoidance of treatment centers, exacerbating the crisis . In the context of COVID-19, misinformation was particularly pervasive, with false claims about the effectiveness of various nostrums, leading to panic buying and shortages . The impact of such misinformation varied across regions . These dynamics were often fueled by psychological and social factors, including fear, uncertainty, and the reinforcing nature of social media algorithms, which created echo chambers of false information . The wide-ranging consequences affected not only immediate health behaviors but also the trust in, and response to, public health authorities .

Misinformation during a public health crisis is nothing new. However, the scale and speed at which misinformation spread during the COVID-19 pandemic are unparalleled. This situation was exacerbated by the widespread use of social media and the internet, where rumors can rapidly reach large audiences . This spread of misinformation had far-reaching consequences: it undermined public health efforts, promoted harmful practices, contributed to vaccine hesitancy, and possibly prolonged the pandemic .
These effects went beyond individual health behaviors; they influenced public health policies and diminished trust in health authorities and the scientific community . In light of these challenges, the machine learning–enhanced graph analytics (MEGA) framework has emerged as a novel approach to managing infodemics by leveraging the power of machine learning and graph analytics. This framework offers a robust method for detecting spambots and influential spreaders in social media networks, which is crucial for assessing and mitigating the risks associated with infodemics. Such advanced tools are essential for public health officials and policy makers to navigate the complex landscape of misinformation and to develop more effective communication strategies . Furthermore, combating this infodemic necessitates a strategic approach encapsulating the “Four Pillars of Infodemic Management”: (1) monitoring information (infoveillance) to track the spread and impact of misinformation; (2) enhancing eHealth literacy and science literacy, empowering individuals to evaluate information critically; (3) refining knowledge quality through processes such as fact checking and peer review, ensuring the reliability of information; and (4) ensuring timely and accurate knowledge translation, minimizing the distortion by political or commercial interests . These measures are essential for mitigating the impact of misinformation and guiding the public and professionals toward quality health information during the pandemic and beyond. The COVID-19 pandemic has highlighted the need for improved public health communication and preparedness strategies, particularly in countering misinformation to prevent similar challenges in future health crises . Pertinent Questions Recognizing the unique challenges posed by the COVID-19 infodemic, this comprehensive scoping review seeks to systematically explore various dimensions of misinformation related to the pandemic. Our investigation is informed by a critical analysis of existing literature, noting a gap in studies that collectively examine the themes, sources, target audiences, impacts, interventions, and effectiveness of public health communication strategies against COVID-19 misinformation. To the best of our knowledge, this is the first review that attempts to bridge this gap by providing a comprehensive and integrated analysis of these key dimensions. While individual aspects of misinformation have been addressed in prior research, there lacks a comprehensive review that integrates these components to offer a holistic understanding necessary for effective countermeasures. Therefore, our review is structured around four pertinent questions, each carefully selected for their significance in advancing our understanding of the COVID-19 infodemic and its counteraction: What is the extent of COVID-19 misinformation? How can it be addressed? What are the primary sources of COVID-19 misinformation? Which target audiences are most affected by COVID-19 misinformation? What public health communication strategies are being used to combat COVID-19 misinformation? These questions were selected to emphasize critical areas of COVID-19 misinformation that, when addressed, can significantly contribute to bridging technical and knowledge gaps in our response to current and future public health emergencies. 
By detailing our study’s contributions to existing literature, we aim to present distinctive understandings crucial for policy makers, health professionals, and the public in effectively addressing misinformation challenges.
This scoping review was conducted following the methodology framework defined by Arksey and O’Malley and elaborated upon by Levac et al . This framework, recognized for its systematic approach, involves five stages: (1) defining the research question; (2) identifying relevant studies; (3) selecting appropriate literature; (4) charting the data; and (5) collating, summarizing, and reporting the results. Databases and Search Strategies The literature search targeted 3 major databases: MEDLINE (PubMed), Embase, and Scopus. These databases were selected for their comprehensive coverage of medical, health, and social science literature. The search strategy was crafted using a combination of keywords and subject headings related to COVID-19, misinformation, and public health communication.
We used (“COVID-19” OR “SARS-CoV-2” OR “Coronavirus”) AND (“Misinformation” OR “Disinformation” OR “Fake news” OR “Infodemic”) AND (“Public health outcomes” OR “Health impacts”) AND (“Communication strategies” OR “Public health communication”). The inclusion and exclusion criteria are presented in . Inclusion and exclusion criteria. Inclusion criteria Article type: peer-reviewed studies Language: published in English Publication date: published between December 1, 2019, and September 30, 2023 Focus: addresses COVID-19 misinformation and its sources, themes, and target audiences, as well as the effectiveness of public health communication strategies Study design: empirical studies (eg, cross-sectional, observational, randomized controlled trials, qualitative, and mixed methods) Exclusion criteria Article type: non–peer-reviewed articles, opinion pieces, and editorials Language: published in languages other than English Publication date: published before December 1, 2019, or after September 30, 2023 Focus: does not address COVID-19 misinformation or its related aspects Study design: case studies and anecdotal reports The study selection process involved an initial screening of titles and abstracts to eliminate irrelevant studies, followed by a thorough full-text review of the remaining articles. This critical stage was conducted by the authors, each with expertise in public health communication and health services research, thereby enhancing the thoroughness and reliability of the selection process. In cases of disagreement, the reviewers engaged in discussions until a consensus was reached on the inclusion of each article. In addition, we adhered to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines to enhance the thoroughness and transparency of our review (see for the PRISMA-ScR checklist). Overview A total of 390 articles were identified from the 3 databases, of which, after removing 134 (34.4%) duplicates, 256 (65.6%) articles remained. Of these 256 articles, 69 (27%) were selected based on abstract searches. Of the 69 full-text articles, 27 (39%) were assessed for eligibility. Of these 27 studies, 21 (78%) were included in the scoping review . This analysis of the 21 studies provides a comprehensive overview of the many impacts of misinformation during the COVID-19 pandemic, including its characteristics, themes, sources, effects, and public health communication strategies. Study Characteristics The included studies exhibited considerable diversity in terms of their methodologies, geographic focus, and objectives . Verma et al conducted a large-scale observational study in the United States, analyzing social media data from >76,000 users of Twitter (subsequently rebranded X) to establish a causal link between misinformation sharing and increased anxiety. By contrast, Loomba et al carried out a randomized controlled trial in both the United Kingdom and the United States to examine the impact of misinformation on COVID-19 vaccination intent across different sociodemographic groups. In the United States, Bokemper et al used randomized trials to assess the efficacy of various public health messages in promoting social distancing. Xue et al used observational methods to explore public attitudes toward COVID-19 vaccines and the role of fact-checking information on social media. 
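To make the retrieval reproducible, the Boolean search string above can also be submitted programmatically. The following is a minimal sketch against the public NCBI E-utilities esearch endpoint for PubMed; the date window mirrors the eligibility criteria, while the retmax cap is an arbitrary illustration:

```python
import requests

# NCBI E-utilities esearch endpoint (public API for PubMed queries)
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

query = (
    '("COVID-19" OR "SARS-CoV-2" OR "Coronavirus") AND '
    '("Misinformation" OR "Disinformation" OR "Fake news" OR "Infodemic") AND '
    '("Public health outcomes" OR "Health impacts") AND '
    '("Communication strategies" OR "Public health communication")'
)

params = {
    "db": "pubmed",
    "term": query,
    "datetype": "pdat",          # filter on publication date
    "mindate": "2019/12/01",     # inclusion window start
    "maxdate": "2023/09/30",     # inclusion window end
    "retmax": 500,               # illustrative cap on returned IDs
    "retmode": "json",
}

response = requests.get(EUTILS, params=params, timeout=30)
result = response.json()["esearchresult"]
print("Records found:", result["count"])
print("First PMIDs:", result["idlist"][:10])
```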
These studies collectively used quantitative analysis, web-based surveys, cross-sectional studies, and social network analysis, reflecting the diversity of research approaches. Sample sizes ranged from hundreds to tens of thousands of participants, providing a broad view of the infodemic’s impact. Notably, most of the studies (17/21, 81%) were conducted on the web, underlining the infodemic’s digital nature. The outcomes assessed spanned various public health aspects, including mental health, communication effectiveness, and behavior change. Kumar et al used social network and topic modeling analyses to gain insights into public perceptions on Reddit, contributing to the methodological diversity within the reviewed literature.

Misinformation Themes and Sources

Misinformation Themes

The studies reported many themes, presenting a diverse and interconnected landscape of COVID-19 misinformation. A significant amount of this misinformation related to the virus’s origins and transmission, with theories ranging from accidental laboratory releases to purported links with 5G technology. These theories often reflected a tendency to misinterpret scientific data or attribute the pandemic to external and frequently sensational causes. A significant proportion of misinformation concerned treatments and preventives for COVID-19, where unscientific remedies (promoted accidentally or deliberately) and vitamin supplements were touted as effective. This was coupled with widespread misconceptions and conspiracy theories about COVID-19 vaccines. The effectiveness of public health measures such as masks and social distancing was often mischaracterized or misrepresented, sometimes due to political and economic theories. Social media played a significant role in amplifying dangerous beliefs and practices. The studies demonstrate that misinformation during the pandemic ranged from basic misunderstandings to elaborate conspiracy theories.

Sources of Misinformation

The studies provide a comprehensive analysis of the various sources of COVID-19 misinformation, with a particular focus on social media platforms such as Facebook, WhatsApp, Twitter, Reddit, and YouTube, which were repeatedly identified as primary channels for spreading false information. These platforms facilitated the spread of misinformation not only through user-generated content but also through public figures and political leaders, whose remarks often fueled rumors and unsubstantiated claims. Traditional media sources, including television, newspapers, and radio, also added to the misinformation landscape, either by directly spreading false information or by passing on misleading statements and rumors. The influence of informal networks, such as family, friends, and community gatherings, was highlighted, pointing to the significance of word-of-mouth communication in the dissemination of misinformation. Furthermore, the studies identified specific web-based communities and forums, such as Facebook groups and subreddits, where misinformation was not only shared but also reinforced within echo chambers.
Target Audience of Misinformation

The selected studies revealed a complex landscape of COVID-19 misinformation targeting diverse audiences, with a significant focus on the general public across countries; for instance, Datta et al and Hou et al identified both health care professionals and the broader global population, including those in China, the United States, and countries with traditional medicine practices, as key recipients of misinformation. Susceptibility to misinformation was also observed in individuals with low health literacy, depression, or a proclivity for conspiracy theories, as well as in vaccine-hesitant individuals and those with a mistrust of vaccines. Digital platforms played a significant role in shaping public perceptions, with studies highlighting the impact of misinformation on social media users, online forum participants, and those engaging with user-generated content. Moreover, specific populations such as Serbian adults, American women, racial minority individuals, students, public health professionals, and essential workers were reported as being particularly affected or targeted by misinformation campaigns.

Impacts of Misinformation on Public Health Outcomes

Identified Negative Impact

The findings presented many negative effects of misinformation on public health. One primary consequence was the impact on health care professionals, who faced challenges in discerning accurate information, leading to disruptions in routine decision-making and care practices. The public was also affected, with misdirected responses and increased reliance on unproven remedies, indicating missed opportunities for effective epidemic control. Misinformation significantly disrupted health and risk communication, contributing to social unrest and heightened anxiety. It also directly impacted public health measures, as evidenced by lower intent to accept COVID-19 vaccines, reduced adherence to official health guidelines, and noncompliance with basic preventive measures such as handwashing. The spread of misinformation resulted in decreased public trust in science, undermining the effectiveness of public health messaging and leading to increased vaccine hesitancy. This hesitancy was further exacerbated by the promotion of antivaccine propaganda, posing a barrier to achieving herd immunity. The extent of the impact of misinformation was also evident in the public’s mental health, with reports of increased anxiety, suicidal thoughts, and distress, as well as in overall public attitudes toward the pandemic and toward vaccines, which became increasingly negative over time.

Measured Outcomes

The studies highlighted the challenges that individuals and communities faced in navigating the pandemic amid a flood of misinformation. It was reported that misinformation significantly impacted health care professionals, leading to discomfort, distraction, and difficulty in discerning accurate information, which in turn affected decision-making and routine practices. The public’s response was manifested by changes in search behaviors and purchasing patterns, reflecting the influence of rumors and celebrity endorsements. It was reported that “fake news” significantly affected the information landscape, skewing the perception of truth versus lies. Hesitancy in the intent to receive COVID-19 vaccines was reported across demographic groups.
Misinformation also altered health behaviors, such as handwashing and the use of disinfectants, and influenced preventive behavioral intentions. It was also reported that misinformation affected public adherence to COVID-19 prevention, risk avoidance behaviors, and vaccination intentions. Communication strategies during quarantine, public trust and engagement with authorities, and compliance with quarantine measures were influenced by the level of concern, which was itself shaped by misinformation. It was reported that misinformation led to changes in social distancing and mask wearing. Social media platforms exhibited a prevalence of antivaccine content and a focus on misinformation in web-based discussions. The studies also reported that emotional and linguistic features in vaccine-related posts influenced public attitudes toward vaccines, reflecting the impact of different information sources. Anxiety levels were heightened due to exposure to misinformation, especially among specific demographic groups. Some of the studies (2/21, 10%) found that misinformation affected public trust in health experts and government and altered the perceived severity of COVID-19.

Potential Contributing Factors

The studies identified a wide array of factors that contributed to the spread of misinformation during the pandemic. Key among these were social media and connections with family and friends, which hastened the spread of unregulated information. The issue was further compounded by delayed and nontransparent communication from health authorities, coupled with the absence of early, authoritative responses. Cognitive biases, a lack of digital and health literacy, and the exploitation of social divisions also played significant roles. Factors such as sociodemographic characteristics, trust in information sources, the frequency of social media use, and the nature of the misinformation itself were also important. The spread of misinformation was also influenced by gender, education level, and the distinction between urban and rural living, as well as by age, the effectiveness of media channels, the initial understanding of SARS-CoV-2, and trust in authorities, particularly in relation to quarantine measures. Contributing factors included beliefs in conspiracy theories, cognitive intuition, an overestimation of COVID-19 knowledge, and susceptibility to cognitive biases, alongside political orientation and religious commitment. Public behavior was also shaped by concerns about government infringement on personal freedoms. Finally, exposure to fake news and conspiracy stories, cultural attitudes toward government mandates, and the spread of misinformation through social media were noted.

Public Health Communication Strategies and Their Effectiveness

Intervention Strategies

The studies highlighted the critical role of effective public health communication strategies in addressing COVID-19 misinformation. These included a range of approaches such as enhancing health literacy and reinforcing social media policies against fake news, along with using fact checking and empathetic communication to debunk misinformation. The importance of timely and accurate information dissemination, particularly through social media, was also noted as a crucial component of authoritative communication. In addition, several studies advocated for tailored communication approaches.
These approaches involved targeting specific misinformed subgroups, using infographics to clarify scientific processes, and focusing on community protection while reframing reckless behaviors. Essential strategies included training health care professionals to accurately identify credible information, alongside implementing media literacy campaigns and prioritizing groups considered vulnerable in public communication. Engaging skeptics, particularly vaccine skeptics, through interventions was reported as essential, with an emphasis on debunking misinformation, promoting credible information sources, and reducing exposure to misinformation.

Intervention Methods

The included studies reported various intervention methods to combat misinformation. Key strategies included the use of credible sources, the implementation of targeted campaigns, and the integration of digital technologies such as social media tools and algorithmic analyses. Educational efforts, ranging from basic loudspeaker announcements to sophisticated web-based educational tools and infographics, were also reported to be effective. The importance of engaging the public through surveys, randomized interventions, and peer discussions was noted. Fact checking, in partnership with third-party organizations and through internal processes, was highlighted as crucial, along with the need for empathetic communication. Finally, some of the studies (2/21, 10%) showed the importance of identifying predictors and using analytical models to refine strategies and better understand public sentiment.

Platform or Channel for Communication

The studies reported that a diverse array of platforms and channels played a crucial role in effective communication during the COVID-19 pandemic. Digital and social media platforms, such as Facebook, Reddit, and YouTube, were extensively used to disseminate facts and counter misinformation, as noted by numerous studies (8/21, 38%). Government websites and official channels, alongside health care settings, were also acknowledged for their value in providing reliable and accurate information. Traditional media forms, including television, radio, and print, were found to be crucial in reaching wide audiences. Web-based platforms designed for research and surveys, such as Prolific, played a key role in gauging public perceptions and addressing misinformation. Furthermore, community networks and personal communications were identified as essential, particularly village health volunteer networks and engagement with health professionals and academics, which demonstrated remarkable effectiveness in local communities and areas with limited digital access.

Effectiveness Metrics and Reported Effectiveness

In studies on public health communication during the pandemic, effectiveness metrics focused on reducing misinformation and improving health behaviors. Detailed engagement metrics included tracking interactions with verified versus fake news, changes in vaccination intent, and shifts in public attitudes toward vaccines over time. Unique metrics such as internet search trends correlating with public behavior, adherence to health guidelines, and the impact of misinformation on mental health were also explored. Studies such as that by Gruzd et al analyzed social media for misinformation removal and provaccine content.
The reported effectiveness of interventions such as fact checking and clear communication varied across the studies, influencing vaccine attitudes and trust in science to varying degrees. Some of the studies (8/21, 38%) pointed to increased public support for measures such as quarantine, emphasizing the role of community engagement, but also noted challenges in maintaining long-term effectiveness and addressing varied reactions such as anxiety in response to misinformation. These studies, often based on computational analyses, existing literature, and theoretical models, highlighted the complex, multifaceted nature of public health communication during the pandemic.

Recommendations, Gaps, and Future Directions

Recommendations for Addressing COVID-19 Misinformation

The included studies recommended a comprehensive approach encompassing strategic public health communication, educational initiatives, and policy adaptation. Key themes included effective information regulation and enhancing discernment skills among health care professionals as well as the general public, while strategies included platform-specific and demographic-focused approaches to combat misinformation. Governmental leadership and international coordination were considered crucial, and educational strategies were recommended to focus on improving health literacy and researching misinformation inoculation. Public health messaging and web-based moderation policies were deemed effective, and technological interventions and comprehensive policy making were recommended. Methodological research to understand extended debates and debunking techniques was emphasized, as were tailored communication and messaging strategies.

Identified Gaps in Addressing Misinformation

The studies highlighted several gaps in managing COVID-19 misinformation and public health communication. Challenges included distinguishing authentic information from misinformation, the persistence of fake news, and the presence of echo chambers in social media networks. Timely, actionable advice for personal protection and effective risk communication during the early stages of the pandemic was lacking. Research limitations included a lack of real-world simulation, leading to challenges in generalizability. There was insufficient understanding of the role of health authorities as trusted sources, of media preferences during crises, and of the effectiveness of information dissemination in different regions. Challenges arising from legal and ethical considerations, resource limitations, disparities in education access, and insufficient exploration of the relationship between misinformation and vaccine acceptance were also noted.

Proposed Future Research and Actions

Future research directions included developing guidelines for medical information dissemination, enhancing crisis communication skills among health care professionals, and creating targeted interventions based on demographics. Evaluating the impact of governmental and international organization communications, conducting research within social media settings, and analyzing the impact of misinformation more accurately were recommended. Studying media habits during crises, examining long-term behavioral changes after quarantine, and dissecting the influential aspects of messages were suggested.
Investigating psychological factors, evaluating emotional appeals in health communication, and developing strategies for credible sources to enhance their social media influence were proposed. Ethically and legally compliant technological interventions, efficient resource allocation policies, and extensive studies on psychological impacts were recommended. Mourali and Drake proposed quantifying extended debates, studying message elements and sources, and exploring “prebunking.” Longitudinal studies, research on user engagement with social media content, and interventions to mitigate misinformation effects were highlighted. Finally, the studies suggested a holistic approach involving collaboration among companies, governments, and users; continuous monitoring of misinformation trends; regular fact checking; legal actions against sources of misinformation; and specific communications to debunk myths.
Principal Findings

Our study underscores the profound influence of misinformation during the COVID-19 pandemic, particularly in shaping public responses. Misinformation, primarily propagated through social media, led to widespread misconceptions about the severity of COVID-19 infection, triggering public confusion, reluctance to adhere to health guidelines, and increased vaccine hesitancy. This phenomenon significantly impacted vaccine uptake rates. Gallotti et al highlighted the simultaneous emergence of infodemics alongside pandemics, underlining the critical role of both human and automated (bot) accounts in spreading information of questionable quality on platforms such as Twitter. The authors introduced an Infodemic Risk Index to measure exposure to unreliable news, showing that the early stages of the COVID-19 pandemic saw a significant spread of misinformation, which subsided in favor of reliable sources only as infection rates increased. This emphasizes the complex challenge of managing infodemics in tandem with biological pandemics, necessitating adaptive public health communication strategies that are responsive to evolving information landscapes. Our findings resonate with historical observations in public health crises, as evidenced by studies on the Zika virus outbreak, polio vaccination efforts in India and Nigeria, and the Middle East respiratory syndrome outbreak. Similar patterns of misinformation were also noted in the H1N1 pandemic and the Ebola outbreak. These instances highlight the critical need for clear, proactive communication strategies to effectively manage misinformation and guide public understanding and responses. The review also reveals a predominant focus on digital misinformation, underscoring the need to understand the role of traditional media and word-of-mouth communication in spreading misinformation.
While studies such as that by Basch et al have started to address this gap, there is a clear need for more extensive research, particularly on the long-term effects of misinformation on public health behaviors after a pandemic. This shift toward credible information, as observed by Gallotti et al, signals an opportunity for future research to explore how changing information consumption patterns can be leveraged in public health messaging. Such observations are crucial for developing effective communication strategies, highlighting the necessity of integrating infodemic management with pandemic response efforts to mitigate misinformation effects and guide public behavior appropriately. The disparity in the effectiveness of misinformation mitigation strategies points to the need for a nuanced understanding of how misinformation evolves over time. Studies such as that by Vijaykumar et al highlight the challenges in countering rapidly changing misinformation narratives on digital platforms. Further investigation into the effectiveness of fact checking across different cultures and demographics, as suggested by Chou et al, is essential for developing better strategies to combat misinformation in diverse settings. This review found that various factors, including delayed communication from health authorities, cognitive biases, sociodemographic characteristics, trust in official sources, and political orientation, played a significant role in the spread of misinformation during the pandemic. These findings align with observations in other studies. Eysenbach emphasized the importance of trust in government agencies and health care providers in shaping individuals’ beliefs and their willingness to share accurate information during public health crises. In addition, Pennycook and Rand highlighted how political beliefs and affiliations can influence people’s interpretation of information, thus impacting their acceptance or rejection of official guidance during public health crises. The study by Gallotti et al also highlighted the differentiated roles of verified and unverified users on social media in propagating COVID-19–related information. Their analysis shows that verified users began to point more toward reliable sources over time, hinting at the potential of leveraging social media influencers and verified accounts to direct public attention to factual and scientifically verified information. These insights indicate the critical need for dynamic public health strategies that are adaptable and actionable, aimed at curtailing misinformation through education and technology. It is essential to incorporate digital literacy and clear, audience-specific messaging to effectively counter misinformation, a strategy that has proven successful in health crises beyond the COVID-19 pandemic; for example, during the H1N1 pandemic, targeting specific audience segments with tailored messages significantly improved public understanding and guideline compliance. Likewise, during the Ebola outbreak, proactive and transparent strategies were key in dispelling rumors and building trust in public health authorities. These approaches, based on an understanding of the target audience’s concerns and media habits, are consistent with our findings that digital literacy and targeted messaging played a critical role in mitigating COVID-19 misinformation effects.
Such strategies are vital not only for immediate crisis response but also for fostering long-term resilience in public health communication, helping to enable the public to distinguish credible information from misinformation, with the ultimate goal of enhancing public health outcomes and trust in health authorities. In examining the authoritarian responses to the pandemic, particularly in Brazil and Turkey, it is evident that leadership tactics significantly contributed to societal polarization and misinformation. Leaders in these countries used the crisis to suppress dissent and consolidate power, often spreading misinformation and underreporting COVID-19 cases, thereby exacerbating public mistrust and confusion . Similarly, a study of communication strategies across countries with high rates of infection emphasized the variation in political leaders' approaches, where strategies ranged from science-based communications to ideologically influenced messaging . The study highlighted the potential for political leaders to influence public health responses through their communication tactics, further impacting public behavior and trust in health guidelines . In certain situations, the integration of political ideology with public health messaging, as observed in countries such as the United States, Brazil, India, and the United Kingdom, not only perpetuated misinformation but also intensified societal rifts . This highlights the paramount role of leadership in navigating public health crises; for instance, in the United States and Brazil, political leaders' approaches to the COVID-19 pandemic, characterized by mixed messaging on mask wearing and social distancing, contributed to public confusion and a politicized response to the pandemic. Similarly, the initial underestimation of the virus's impact in India and the United Kingdom's delayed lockdown response serve as examples of how political decisions can shape public health outcomes and trust in health authorities, emphasizing the profound impact of aligning political views with public health communication . In addition, the initial reluctance of the World Health Organization to endorse mask wearing, social distancing, and handwashing, followed by a later reversal of these recommendations, exemplifies the challenges and confusion created by global health leadership during the early stages of the pandemic . Such shifts in guidance contributed to the global spread of misinformation, further complicating public health responses and trust in international health authorities . Applying the MEGA framework in practical settings could revolutionize public health communication, offering a model for how technology can be harnessed to tackle misinformation more effectively. By processing massive graph data sets and accurately computing infodemic risk scores, MEGA supports the development of targeted communication strategies and interventions.
Its approach to preserving crucial feature information through graph neural networks signifies a leap forward in optimizing learning performance, underscoring the framework’s utility in crafting evidence-based policies and initiatives to effectively combat misinformation. This emphasizes the importance of integrating advanced technological solutions, such as MEGA, into public health strategies to enhance the precision and effectiveness of infodemic management . The integration of social media literacy into public health strategies is emphasized as essential by Ziapour et al , suggesting that a populace equipped with advanced media literacy skills exhibits greater resilience against misinformation. Our study reveals the profound impact of the COVID-19 infodemic, which extended beyond public health and eroded trust in health institutions and government authorities. This decline in trust contributed to societal polarization, mirroring the effects seen in the Ebola outbreak, where misinformation led to notable repercussions . Further research, similar to that conducted on the Zika outbreak by Basch et al , is needed to understand the long-term effects of misinformation on societal cohesion and trust. Addressing this evolving landscape of misinformation requires dynamic and adaptable public health policies. These strategies should integrate insights from various methodologies, using both digital and traditional media for greater reach and impact, drawing lessons from the successful strategies deployed during the H1N1 pandemic, such as those highlighted by Chou et al . Our study advocates for a collaborative approach, uniting governments, the private sector, and the public in a concerted effort to combat misinformation, highlighting the importance of joint action in this global challenge. This approach should include continuous monitoring of misinformation trends, implementing regular fact checking, taking legal action against sources of misinformation, and developing specific communications to debunk myths. Similar findings have been reported in studies addressing misinformation related to the Zika virus , yellow fever , and Ebola , emphasizing the importance of a holistic strategy involving all stakeholders . Limitations The review has several limitations to consider. First, there is a temporal limitation because it included only studies published between December 2019 and September 2023, potentially excluding more recent research that could have offered additional insights. Second, the reliance on specific databases (MEDLINE [PubMed], Embase, and Scopus) as the primary sources for data might have led to the omission of pertinent studies that are not indexed in these databases. Third, the study’s sole focus on research articles may have excluded valuable insights from other scholarly works such as conference papers, theses, case studies, and gray literature. Finally, it is important to acknowledge that the study’s restriction to English-language publications may have excluded valuable research conducted in other languages. While efforts were made to review the available literature comprehensively, omitting non-English sources could limit the breadth and depth of the findings. Recognizing these limitations, future endeavors should aim to expand the scope of research beyond these constraints, incorporating a more diverse range of sources, languages, and real-world interventions to enrich our understanding of, and response to, misinformation. 
Conclusions The results of this review emphasize the significant and complex challenges posed by misinformation during the COVID-19 pandemic. It shows how misinformation can have a wide impact on public health, societal behaviors, and individual mental well-being. The findings highlight the critical role of effective public health communication strategies in addressing the infodemic. It is essential that these strategies are not only targeted and precise but also adaptable and inclusive, ensuring that they are relevant to diverse demographic and sociocultural contexts. The review also emphasizes the need for ongoing collaborative research efforts to further explore the nuances of the misinformation spread and its consequences. This requires cooperation among health authorities, policy makers, communication specialists, and technology experts to develop evidence-based approaches and policies to combat misinformation. Furthermore, the review highlights the importance of refining public health communication strategies to keep up with the ever-changing nature of misinformation, especially in the digital realm. It advocates using advanced technology and data-driven insights to enhance the reach and impact of health communication. By combining scientific rigor, technological innovation, and empathetic communication, these strategies can contribute to building public trust, promoting health literacy, and creating resilient communities capable of recognizing and countering misinformation. In summary, the lessons learned from the COVID-19 pandemic emphasize the necessity of strengthening public health communication infrastructures. This strengthening is vital for addressing the current misinformation crisis and preparing for future public health emergencies. Implementing these recommendations will play a crucial role in shaping a more informed, aware, and health-literate global community better equipped to confront the challenges posed by misinformation in our increasingly interconnected world. Furthermore, future research directions should explore integrating advanced large language models with frameworks similar to MEGA. This exploration will bolster automated fact checking and infodemic risk management, contributing to more effective strategies in combating misinformation in public health communication.
Clinical Outcome Discrimination in Pediatric ARDS by Chest Radiograph Severity Scoring | d98e518a-23d9-42fa-bf08-b6392ccf97dc | 9124134 | Pediatrics[mh] | Acute respiratory distress syndrome (ARDS) is a complex syndrome with heterogeneous causes and underlying diseases and carries high rates of morbidity and mortality . The largest pediatric ARDS (PARDS) validation (the PARDIE study) showed that the incidence of PARDS was 3.2% among pediatric intensive care unit (PICU) patients and that the mortality of severe PARDS was up to 33% . According to the Pediatric Acute Lung Injury Consensus Conference (PALICC) PARDS definition, not only lung mechanics, oximetry, and blood gases should be noted but also chest imaging . The imaging pattern of PARDS can comprise unilateral or bilateral pulmonary infiltrates. Although imaging manifestations frequently lag behind the development of hypoxemia, the distribution pattern can help choose specific ventilatory settings, monitor therapeutic response, and even predict clinical outcome . Modalities for imaging evaluation of PARDS include chest radiography (CXR), CT, and ultrasound. Although CT is the gold standard for demonstrating the precise morphology of lung ventilation, safety issues related to patient transfer and radiation exposure limit its utility . As a radiation-free and noninvasive examination, transthoracic lung ultrasound (LUS) is convenient for PARDS evaluation . However, subcutaneous emphysema, large thoracic dressings, and providers' skills and experience might limit its efficiency in particular cases . CXR thus retains an essential role in clinical practice. Since the extent and degree of alveolar damage on CXR reflect severity, Warren and colleagues established the radiographic assessment of lung edema (RALE) scoring method in adults, providing a novel tool to predict prognosis in ARDS . After its establishment, relevant studies in adults were published . However, to our knowledge, validation of the RALE score in children remains rare. Herein, this study aimed to assess the severity and prognosis of children who met the criteria for PARDS. Furthermore, we compared the scoring consistency between a radiologist and a pediatrician, investigated the relationship between CXR findings and severity, and evaluated prognostic discrimination based on the RALE scoring method.
2.1. Study Design This was a single-center retrospective observational study conducted between January 1, 2018 and June 30, 2021. Institutional ethics committee approval (KSSHERLL2018005) was obtained prior to commencement of the study, and informed consent was obtained.
2.2. Participant Recruitment Patients admitted to the PICU were eligible for the study if they met the PALICC PARDS diagnostic criteria, received invasive mechanical ventilation (IMV), had bedside CXR examinations, and had an etiology of pulmonary infection proven by sample culture and/or quantitative DNA polymerase chain reaction (PCR) testing (bacterial/viral/fungal). The exclusion criteria were age ≤28 days; admission time less than 24 h; receipt of extracorporeal membrane oxygenation (ECMO) therapy; special populations, namely, patients with cyanotic heart disease, chronic lung disease, or left-ventricular dysfunction; and incomplete clinical or CXR data.
2.3. Data Collection Patients' data were recorded and compared as follows: general demographics, including age, sex, etiology of ARDS, intubation time, oxygenation index (OI), positive end-expiratory pressure (PEEP), and SpO2; number of performed CXRs and CXR RALE scores; pulmonary complications, namely, air-leak syndrome, pleural effusion, and alveolar hemorrhage; and prognosis (28-day mortality). Subgroups were divided according to prognosis (survival and death). Infection and pulmonary complications were recorded as risk factors for the discrimination analysis.
2.4. CXR RALE Scoring Each CXR was divided into four quadrants, vertically by the midline of the spine and horizontally at the level of the left upper and lingular lobe bronchus (first branch of the left main bronchus). Based on the RALE method, the extent (consolidation score) and degree (density score) of each quadrant were calculated as follows : consolidation scores 0 (no alveolar opacity), 1 (extent <25%), 2 (extent 25%–50%), 3 (50%–75%), and 4 (>75%); density scores 1 (hazy), 2 (moderate), and 3 (dense). The quadrant score equals the consolidation score times the density score, and the total score equals the sum of the four quadrant scores, as shown in Figures and . According to the PALICC PARDS criteria, patients with unilateral pulmonary infiltrates were also scored with the RALE method. Each CXR was scored independently by two observers (a radiologist with 17 years' experience and an advanced pediatrician with 14 years' experience) in order to evaluate interobserver variation. Day 1 (since intubation) was defined as baseline. If multiple CXRs were performed on a single day, the most severe one was selected by the observers for scoring.
2.5. Statistics All statistical analyses were performed with Jeffrey's Amazing Statistics Program (JASP, version 0.14.1). All continuous variables conforming to the normal distribution were expressed as mean ± standard deviation. Variables with an abnormal distribution were described with the median value (median, interquartile range, 25–75%). We used the two-way random model (absolute agreement type) to calculate the intraclass correlation coefficient (ICC) to assess the reliability of the two independent observers. Bland–Altman plots were used to show the agreement of the independent observers. The chi-square test was used to compare sex, infection, and pulmonary complications. The t-test was used to compare age, intubation time, OI, PEEP, SpO2, and RALE scores.
Receiver operating characteristic (ROC) curve analysis was performed, and the area under the ROC curve (AUC) was calculated. Cox regression (based on the proportional-hazards model) was used to estimate hazard ratios. The level of significance was set at 0.05.
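To make the scoring rule in Section 2.4 concrete, the following minimal Python sketch computes a total RALE score from per-quadrant grades. It is an illustration only: the function name and input format are our own, not part of the study software (scoring in the study was performed by the two observers, with statistics run in JASP).

# Minimal sketch of the RALE computation described in Section 2.4.
# Each quadrant is graded as (consolidation 0-4, density 1-3);
# quadrant score = consolidation x density; total = sum over 4 quadrants.
def rale_total(quadrants):
    """quadrants: list of four (consolidation, density) tuples."""
    if len(quadrants) != 4:
        raise ValueError("expected exactly four quadrants")
    total = 0
    for consolidation, density in quadrants:
        if not 0 <= consolidation <= 4:
            raise ValueError("consolidation score must be 0-4")
        if not 1 <= density <= 3:
            raise ValueError("density score must be 1-3")
        total += consolidation * density  # quadrant score
    return total

# Hypothetical example: two hazy quadrants with <25% extent and two
# dense quadrants with 50-75% extent -> 1*1 + 1*1 + 3*3 + 3*3 = 20.
print(rale_total([(1, 1), (1, 1), (3, 3), (3, 3)]))  # 20

The maximum possible total is 48 (all four quadrants dense with >75% involvement), so the day 3 cutoff of 21 points reported below sits roughly in the middle of the scale.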
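As a hedged illustration of the agreement analysis in Section 2.5, the sketch below computes the Bland–Altman bias and 95% limits of agreement (bias ± 1.96 × SD of the paired differences). The function and data arrays are placeholders, but plugging in the summary values reported in the Results approximately reproduces the published limits.

# Bland-Altman limits of agreement for two raters' RALE scores.
# Limits = bias +/- 1.96 * SD of the paired differences.
import numpy as np

def limits_of_agreement(scores_a, scores_b):
    d = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Check against the reported summary values (bias -0.49, SD 3.035):
bias, sd = -0.49, 3.035
print(bias - 1.96 * sd, bias + 1.96 * sd)  # about -6.44 and 5.46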
3.1. Comparisons of PARDS Finally, 116 of the 271 patients met the above criteria, and a total of 463 CXRs were performed . The median age of the 116 PARDS patients was 25 months (5 months, 60.8 months); there were 72 boys and 44 girls. The mortality was 37.9% (44/116). Among them, 56.0% (65/116) were infection patients (virus n = 37, bacteria n = 23, and fungus n = 5), and 31.0% (36/116) had pulmonary complications (air-leak syndrome n = 14, pleural effusion n = 18, and alveolar hemorrhage n = 4). Characteristics of the 116 patients are given in and . OI, PEEP, and SpO2 showed statistically significant differences between the survival/death and infection/noninfection groups. Pulmonary complications were more common in the death group ( χ 2 = 11.913, p < 0.001). There was no statistically significant difference in age, sex, or intubation time between the two groups.
3.2. Validation of RALE Score in PARDS The scores of the two observers were compared; the ICC was excellent (ICC = 0.98, 95% CI: 0.97–0.99), and Bland–Altman plots also showed good agreement between the two independent observers' RALE scores (bias = −0.49, SD of bias = 3.035, 95% CI of limits of agreement: −6.44–5.45) . The RALE score of the survival group declined from day 1, whereas the RALE score of the death group peaked on day 3 ( t = −6.248, p < 0.001). Compared to day 1, the RALE score on day 3 was independently associated with survival. The ROC analysis showed that the area under the curve for prediction was 0.773 ( p < 0.001, 95% CI: 0.709–0.838) . With the cutoff score set at 21, the sensitivity was 71.7%, the specificity was 76.5%, and the hazard ratio (HR) was 9.268 (95% CI: 1.257–68.320). The survival curves showed that a RALE score lower than 21 on day 3 was associated with better survival . Pulmonary complication showed an HR of 3.678 ( p < 0.001, 95% CI: 1.174–11.521) for the discrimination. In infection PARDS patients, the day 3 RALE score was significantly different from that of day 1 ( t = −6.178, p < 0.001) .
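A cutoff such as the 21-point threshold above is typically chosen from the ROC curve; the sketch below illustrates one common approach, maximizing Youden's J (sensitivity + specificity − 1). It assumes scikit-learn is available, and the group sizes match the cohort (72 survivors, 44 deaths) while the score distributions themselves are invented, not the study data.

# Illustrative only: choosing a RALE cutoff by maximizing Youden's J
# on synthetic day 3 scores (label 0 = survival, 1 = death).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
scores = np.r_[rng.normal(16, 5, 72), rng.normal(25, 5, 44)]
labels = np.r_[np.zeros(72), np.ones(44)]

fpr, tpr, thresholds = roc_curve(labels, scores)
j = tpr - fpr                      # Youden's J at each threshold
best = int(np.argmax(j))
print("AUC:", round(roc_auc_score(labels, scores), 3))
print("cutoff:", round(float(thresholds[best]), 1),
      "sensitivity:", round(float(tpr[best]), 3),
      "specificity:", round(float(1 - fpr[best]), 3))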
The main objective of this study was to validate whether the novel chest radiograph scoring method applied in adults for evaluating lung edema was also applicable to pediatric ARDS patients. The CXR RALE score in children correlated well with overall disease severity and could predict clinical outcomes. As a marker of clinical prognosis, this practical and simple bedside tool reinforces clinical management, since it is easy to interpret and is assessed with a basic clinical imaging modality. The mortality rates of ARDS in adults and of severe PARDS are broadly similar, and the resources required and costs of care are substantial owing to disease severity . Even so, an efficient quantitative score may allow prediction of the clinical course and help improve management. Warren and her colleagues established the RALE score to evaluate lung edema, considering both extent and density to reflect ARDS severity . Although the original intention of the RALE score was to evaluate lung edema, this pathological change is a key feature of ARDS . According to the PALICC diagnostic criteria, pulmonary edema is not fully explained by heart failure or fluid overload . The common methods for evaluating pulmonary edema are either invasive (catheterization) or difficult to perform (quantitative computed tomographic imaging), and both raise safety concerns. At present, lung ultrasound plays an important role in reducing X-ray exposure, especially in infants . Nevertheless, CXR remains indispensable: it demonstrates an overview of pulmonary and cardiovascular conditions, which LUS cannot match. Both pulmonary edema and hypoxemia (impaired oxygenation) are reflected on CXR to some extent. Thus, correlating these relations can provide a novel approach for the clinical evaluation of disease severity. Recently, Raissaki and her colleagues revised a 5-point scale score for assessing the severity of acute respiratory failure . Beyond that, to our knowledge, this is the first study to use the CXR-based RALE score in PARDS and correlate it with clinical discrimination. In our cohort, the mortality was 37.9%, much higher than in the PARDIE study . The reason for this disparity is that patients with incomplete clinical or CXR data were excluded. The RALE score showed a different trend in the death group than in the survival group, reflecting that severe PARDS progresses faster in its clinical course . This may be because the spectrum of diseases in children differs from that in adults: in this study, 56.0% (65/116) of patients had bacterial or viral infections, whereas chronic cardiopulmonary diseases are common comorbidities in the elderly and trauma is common in young adults . Pulmonary complication was a significant risk factor in predicting prognosis, with an HR of 3.678 (95% CI: 1.174–11.521) for death; pleural effusion was more frequent in infectious disease, and air leak was common in noninfectious cases. To identify the trend of the PARDS course, we set the day 1 (since intubation) RALE score as baseline; the ROC curve showed a significant difference in the day 3 RALE score, with an AUC of 0.773 (95% CI: 0.709–0.838). A cutoff value of 21 points was statistically significant ( p < 0.001), with a sensitivity of 71.7%, a specificity of 76.5%, and an HR of 9.268 (95% CI: 1.257–68.320). These indicators can provide early warning to the clinician. After day 3, the trend of the RALE score was of great significance for clinical prognosis.
A gradual decrease in the score indicated that the disease was alleviating and that the prognosis would be good, whereas a continued rise indicated that the condition was persisting or worsening. A recent study showed that the interpretation of CXR in PARDS varies between radiologists and physicians . The ICC and Bland–Altman plots in this study showed good agreement; the reason is that the items the RALE score evaluates are simple and easy to quantify. Only the extent and density of the infiltration need to be assessed, rather than a variety of imaging findings. Thus, the RALE score is practical. Compared to the RALE study in adults, the RALE scores in severe patients were basically the same, and the score showed good diagnostic performance . There are also some limitations in this study. This was a single-center study with a relatively small sample of children. Because patients with incomplete clinical and imaging data were excluded, the enrolled children were subject to selection bias. We focused only on the correlation between prognosis and the RALE score and did not combine or compare it with other clinical indicators.
The CXR-based RALE score can be used in PARDS and shows good agreement between radiologists and pediatricians. Pulmonary complications and whether the day 3 score exceeds 21 points provide good discriminative effectiveness for prognosis.
|
Biomechanical evaluation of the modified proximal femoral nail for the treatment of reverse obliquity intertrochanteric fractures | ba9d859c-ba85-4b3b-86d0-efda4135f2c8 | 11763255 | Surgical Procedures, Operative[mh] | Based on the AO/OTA classification guidelines, reverse obliquity intertrochanteric fractures (ROIFs) are classified as AO/OTA type 31-A3 . The main fracture line of ROIFs usually runs from proximal-medial to distal-lateral . This differs from AO/OTA 31-A1 and A2 fractures. The proportion of AO/OTA 31-A3 fractures has reached 5.3-23.5% of all femoral intertrochanteric fractures , . Nowadays, to reduce long-term immobilization and its associated complications, surgical treatment is recommended for most ROIFs. Yet, the fixation failure rate of ROIFs remains high , . Therefore, how to improve the treatment of ROIFs has been a pressing issue for orthopedic scholars. Several extramedullary implants were initially used to fix ROIFs, including the dynamic hip screw (DHS), the sliding hip screw, and the proximal femoral anatomic plate , – . However, patients treated with these fixation devices suffered numerous complications – . Moreover, the surgical trauma is relatively large when extramedullary implants are used to fix ROIFs, involving a long incision and excessive blood loss. In view of this, scholars have tended to use intramedullary nails to treat such fractures and have achieved good clinical results , . Intramedullary fixation has several advantages, including a short lever arm, central and minimally invasive fixation, and early weight-bearing. Common intramedullary implants for fixing ROIFs include PFNA, the Gamma3 nail, and InterTAN. Notably, the neck screws of these intramedullary nails are approximately parallel to the main fracture line of ROIFs. This anatomical characteristic distinguishes ROIFs markedly from other types of intertrochanteric fractures, and it also generates different biomechanical mechanisms. The proximal fracture fragment of ROIFs is prone to sliding downward and outward, the distal fragment tends to migrate medially, and the neck screw is prone to cut-out. The region between the implant junction and the main fracture line forms a stress concentration area. Fixation failure resulting from these factors is not uncommon in ROIF patients . Other scholars have used cables to resist this sliding tendency in ROIFs , . Yet, the insertion of cables brings additional soft-tissue damage and prolongs operation time. Hence, there are currently no intramedullary implants specially designed for the treatment of ROIFs. Based on these considerations, our team proposed the modified proximal femoral nail (MPFN, Fig. A) for the treatment of patients with ROIFs. Two screws are interlocked at the proximal part of the MPFN: the neck screw and the subtrochanteric screw. The subtrochanteric screw passes through the tail of the neck screw and then the main nail and is fixed below the lesser trochanter. This interlocking design aims to resist sliding of the fracture fragments and disperse local stress. Our team made biomechanical comparisons among three fixation models (PFNA, InterTAN, and MPFN) for fixing ROIFs via finite element modelling. Finite element analysis (FEA) is a virtual technique that combines computer simulation and digitization. The mechanical properties of new implants can be evaluated by setting boundary conditions and applying loads .
Compared with clinical trials and cadaveric experiments, the finite element technique possesses several advantages, including low cost and the ability to repeat tests. In this research, an AO/OTA 31-A3.1 ROIF model was established via finite element modelling, and three fixation models were evaluated under axial, bending, and torsion load cases. We hypothesized that the MPFN would have the best biomechanical properties among the three fixation models under the simulated loads.
Construction of the AO/OTA 31-A3.1 ROIF model Written informed consent was obtained from the volunteers, and all methods were conducted in accordance with relevant guidelines and regulations. The experimental protocols were approved by the ethics committee of Xi'an Honghui Hospital. Twenty sets of intact femoral CT scans were obtained via a blinded and randomized trial, and the mean values of the CT data were calculated. An intact three-dimensional (3D) femur model was then established from these data via Mimics software (Materialise Company, Leuven, Belgium). These data were imported into the Studio software (3D Systems Inc., Rock Hill, SC, USA), and the surface of the femur model was smoothed and polished. Cortical and cancellous bone were distinguished based on Hounsfield unit (HU) values , with the boundary value assumed to be 700 . On the basis of the AO/OTA classification, an osteotomy plane was created at 60 degrees relative to the sagittal plane above the lesser trochanter, thus establishing an AO/OTA 31-A3.1 ROIF model , .
Construction of three implant models Computer-aided design (CAD) software was used to draw 3D models of the three implants (PFNA, InterTAN, and MPFN), and these implant models were then assembled onto the ROIF models. The anteroposterior (AP) and lateral images of the MPFN device are displayed in Fig. A. The dimensions of the MPFN are as follows. The length of the main nail is 240 mm. The diameters of the proximal and distal parts of the main nail are 17 mm and 10 mm, respectively. The diameters of the neck screw, the subtrochanteric screw, and the two distal locking screws are 10 mm, 5 mm, and 5 mm, respectively. The neck screw is located at the center of the femoral neck and head. The subtrochanteric screw is located below the lesser trochanter and interlocks with the main nail and the neck screw. The subtrochanteric screw is perpendicular to the neck screw, while the angle between the main nail and the neck screw is 130 degrees.
Mesh convergence test and model validation Tetrahedral element meshes were used in the finite element settings. A convergence test was conducted to evaluate the reliability of the models, with reference to similar studies . The maximum von Mises stress on bone was used for analyzing mesh convergence, comparing the maximum stress on the femur models across five mesh sizes: 3 mm, 2.5 mm, 2 mm, 1.5 mm, and 1 mm. The results indicated that the maximum stress on bone at the 1.5 mm mesh was close to that at the 1 mm and 2 mm meshes, with a difference within 5%. Therefore, the mesh size in this study was set to 1.5 mm. At the maximum degrees of freedom, field variables, including displacement and strain energy, were also within a 5% range for both element types, and there was no stress singularity. The mesh convergence values were within 5%, demonstrating the reliability of the models. For model validation, our finite element model of the intact femur was compared to previous experimental data . Vertical loads of 2,100 N were applied to the femoral head to evaluate axial stiffness. The axial stiffness from our finite element computation was 0.52 kN/mm, which lies within the interval (0.76 ± 0.26 kN/mm) of cadaveric experiments . These results demonstrated that our finite element models were well validated.
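A minimal sketch of the convergence criterion just described: the peak von Mises stress is compared between successive mesh refinements and the mesh is accepted once the relative change stays within 5%. The stress values below are placeholders for illustration, not the study's outputs.

# Sketch of the mesh convergence check: treat the mesh as converged
# once peak von Mises stress changes by <5% between successive sizes.
mesh_sizes_mm = [3.0, 2.5, 2.0, 1.5, 1.0]
peak_stress_mpa = [200.0, 190.0, 181.0, 176.0, 174.0]  # hypothetical values

for i in range(1, len(mesh_sizes_mm)):
    prev, curr = peak_stress_mpa[i - 1], peak_stress_mpa[i]
    rel_change = abs(curr - prev) / prev * 100.0
    status = "converged" if rel_change < 5.0 else "refine further"
    print(f"{mesh_sizes_mm[i - 1]} mm -> {mesh_sizes_mm[i]} mm: "
          f"{rel_change:.1f}% ({status})")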
Finite element settings of boundary conditions and loads The material properties of bones and implants were set as homogeneous, isotropic, and linear elastic with reference to previous literature . The elements and nodes numbered 534,775 and 840,042 for the PFNA fixation model, 552,155 and 872,985 for the InterTAN model, and 537,984 and 847,212 for the MPFN model, respectively. Titanium alloy properties were assigned to the three implants. The Young's modulus was assumed to be 16,800 MPa, 840 MPa, and 110,000 MPa for cortical bone, cancellous bone, and titanium alloy, respectively . Poisson's ratio was set to 0.3 for cortical bone and titanium alloy and 0.2 for cancellous bone . Frictional contacts were defined between the different sections of the fixation models, with a frictional coefficient of 0.4 . Boundary settings for the axial, bending, and torsion loads are shown in Fig. B. For axial loading, the femoral condyles were fixed to inhibit extra movement of the whole configuration, and axial loads of 2,100 N were applied vertically to the femoral head to simulate axial compression. Under bending boundary conditions, the mid and distal femur were fixed simultaneously, and loads of 175 N were applied laterally to the femoral head to simulate bending force . For the torsion load case, a torque of 15 Nm was applied along the femoral neck axis to simulate rotation .
Evaluation parameters and percent difference (PD) The maximum stress on implants and bones, and the maximum displacement of the models and of the fracture surface, were tested via FEA under axial, bending, and torsion loads. Since the PFNA device has become one of the most widely applied implants for fixing intertrochanteric fractures and has obtained relatively good therapeutic effects, it was defined as the control group for data analysis. The percent difference was calculated with the following formula: PD = (P1 − Pa)/P1 × 100%, where Pa denotes the value of the InterTAN or MPFN model and P1 denotes the value of the PFNA model.
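The percent-difference formula reduces to a one-line computation; the sketch below encodes it and reproduces one of the values reported in the Results as a check (the function name is ours, introduced for illustration).

# Percent difference relative to the PFNA control:
# PD = (P1 - Pa) / P1 * 100, where P1 is the PFNA value.
def percent_difference(p1_pfna, pa_other):
    return (p1_pfna - pa_other) / p1_pfna * 100.0

# Check against the axial-load implant stresses reported in the Results:
# PFNA 241.34 MPa vs. MPFN 214.55 MPa -> about an 11.1% reduction.
print(round(percent_difference(241.34, 214.55), 1))  # 11.1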
Maximum stress on implants under the three simulated loads The nephograms of maximum stress on the implants under axial, bending, and torsion loads are presented in Fig. . The von Mises stress concentration area was the junction between the neck screw and the main nail for the PFNA and InterTAN models, and the junction between the subtrochanteric screw, the neck screw, and the main nail for the MPFN model. Under axial loads of 2,100 N, the maximum stress on the implants was 241.34 MPa, 259.13 MPa, and 214.55 MPa for the PFNA, InterTAN, and MPFN models, respectively. This value was 64.83 MPa, 58.49 MPa, and 53.98 MPa for these models under bending loads, and 53.53 MPa, 59.30 MPa, and 48.04 MPa under torsion loads of 15 Nm, respectively. The maximum stress on the implants for the MPFN model was less than that of the PFNA and InterTAN models under all three simulated loads. Compared to the PFNA, the PD reduction for the MPFN was 11.1%, 16.7%, and 10.2% in the axial, bending, and torsion load cases, respectively.
Maximum stress on bones under the three simulated loads The nephograms of maximum stress on the femurs under axial, bending, and torsion loads are exhibited in Fig. . The maximum stress on the femurs was 174.92 MPa, 125.72 MPa, and 123.94 MPa for the PFNA, InterTAN, and MPFN models in the axial load case; 60.20 MPa, 56.35 MPa, and 51.04 MPa in the bending load case; and 61.19 MPa, 27.64 MPa, and 34.46 MPa in the torsion load case, respectively. The maximum stress on the femur for the MPFN model was less than that of the PFNA and InterTAN models in the axial and bending tests, and was also lower than that of the PFNA model in the torsion test. Notably, compared to the PFNA, the PD reduction of this index for the MPFN was 29.1%, 15.2%, and 43.7% under the axial, bending, and torsion load conditions, respectively.
Maximum displacement under the three simulated loads The nephograms of maximum displacement under the three simulated loads are presented in Fig. . In the axial load experiments, the maximum displacement was 19.35 mm, 18.58 mm, and 16.91 mm for the PFNA, InterTAN, and MPFN models, respectively. This parameter was 0.49 mm, 0.50 mm, and 0.43 mm in the bending load experiments and 3.12 mm, 3.37 mm, and 2.55 mm in the torsion load conditions. The maximum displacement for the MPFN model was smaller than that of the PFNA and InterTAN models in all three load cases. The PD reduction of maximum displacement for the MPFN model was 12.6% in the axial load case, 10.9% in the bending load case, and 18.1% in the torsion load case, compared to the PFNA model.
Maximum displacement of the fracture surface (MDFS) under the three simulated loads The nephograms of MDFS under axial, bending, and torsion loads are exhibited in Fig. . The MDFS was 13.51 mm, 13.82 mm, and 12.25 mm for the PFNA, InterTAN, and MPFN models in the axial load case; 0.10 mm, 0.07 mm, and 0.07 mm in the bending load case; and 2.15 mm, 2.32 mm, and 1.78 mm in the torsion load case, respectively. The MDFS for the MPFN model was smaller than that of the PFNA and InterTAN models in the three load tests. Specifically, compared to the PFNA, the PD reduction of this index for the MPFN was 9.3%, 33.4%, and 17.0% under the axial, bending, and torsion load conditions, respectively.
From an overall trend perspective, the findings showed that the maximum stress of the MPFN model was lower than that of the PFNA and InterTAN models, and that its maximum displacement was smaller, under axial, bending, and torsion loads. Our results indicate that the MPFN has biomechanical advantages over the PFNA and InterTAN for the management of ROIFs and might be a good strategy for the treatment of patients with ROIFs.

Intramedullary fixation is recommended by most scholars for the treatment of unstable femoral trochanteric fractures , . This central fixation method typically allows patients with ROIFs to bear partial or full weight early on, thereby reducing the incidence of bed-rest-related complications . Previous literature has demonstrated that the stress applied to the femoral head surface during walking can be as high as two to three times body weight (for a 70-kg person, roughly 3 × 70 kg × 9.81 m/s² ≈ 2,060 N) . Hence, axial loads of 2,100 N were simulated via the finite element method in this study. The unique design of the MPFN may explain its superior axial stiffness compared to the PFNA and InterTAN. The interlocking between the neck screw and the subtrochanteric screw restricts the sliding of the proximal fracture fragment in ROIFs. Simultaneously, the subtrochanteric screw disperses the stress at the junction between the main nail and the neck screw. These factors make the MPFN less prone to implant failure than the PFNA and InterTAN.

Currently, there is little literature focusing on the rotational resistance of implants in patients with ROIFs. Based on our data, the MPFN model showed better antirotation performance in fixing ROIFs than the PFNA and InterTAN models. The improved antirotation of the MPFN may be due to its design: a small, rigid triangular structure is formed between the main nail, the neck screw, and the subtrochanteric screw, and a large, stable triangular structure is formed between the neck screw, the subtrochanteric screw, and the medial wall of the proximal femur. This conforms to the principle of the triangular stable structure proposed by Zhang et al. . The Proximal Femur Bionic Nail (PFBN) is a new type of implant designed on the basis of triangular stability theory for fixing intertrochanteric fractures. Biomechanical studies have shown that, compared to the PFNA and InterTAN, the PFBN had better mechanical properties in fixing AO/OTA 31-A1.3 and 31-A3 fractures , . However, the two neck screws of the PFBN are located in the proximal fragment above the main fracture line of ROIFs, so this design may not provide good rotational resistance for ROIFs. The neck screw and the subtrochanteric screw of the MPFN device are located on either side of the fracture line of ROIFs and are locked at a right angle, ensuring good resistance to rotation. As shown in our results, this interlocking structure of the MPFN also provided better resistance to compression and bending than the PFNA and InterTAN devices.

Several scholars have emphasized the importance of medial support for intertrochanteric fractures – . Chen et al.'s study indicated that the incidence of postoperative reduction loss due to a comminuted medial wall was approximately 20% in patients with trochanteric fractures . Song et al.'s study demonstrated that a comminuted medial wall was a relatively reliable parameter for predicting implant failure after intramedullary fixation of femoral intertrochanteric fractures . Nie et al.
conducted biomechanical experiments and demonstrated that the medial wall of the proximal femur is more important than the lateral wall in patients with trochanteric fractures . A medial support nail-II (MSN-II) with a triangular stability structure was developed for the treatment of ROIFs . Nie et al.'s research, using finite element analysis, showed that the MSN-II exhibited better mechanical stability than the PFNA-II for fixing ROIFs under increasing axial loads . Although the MSN-II has two neck screws, they run almost parallel to the fracture line of ROIFs, which limits its overall antisliding capacity. The design of the MPFN enables it to provide good medial support through the subtrochanteric screw and to effectively resist sliding via the right-angle locking structure.

This study has some limitations. Ligaments, muscles, and tendons were not considered during finite element modelling. This is common in similar studies on the biomechanical properties of new orthopedic implants , ; notably, since the interaction between the new implant and bone was the focus of this study, ignoring the influence of soft tissues is reasonable to some extent. Moreover, the force applied to the femoral head is complex in reality, but we simplified it into axial compression, bending, and torsion loads; in further studies, we will try to simulate the loads occurring during walking and motion. In addition, the femur model was assigned homogeneous material properties, whereas the femur of a patient is actually a heterogeneous structure; current digital simulation techniques are still unable to fully represent heterogeneous materials in finite element modelling. Finally, cadaveric experiments and clinical studies will be conducted to further validate our current conclusions.
The modified proximal femoral nail showed the best biomechanical performance for fixing reverse obliquity intertrochanteric fractures, followed by the InterTAN nail and the PFNA. The MPFN has the potential to be a promising device for patients with ROIFs.
Vaccination Reminders in Germany: Taking Stock and Ideas for Tomorrow, Using the Example of the HPV Vaccination

Vaccinations are among the most important and effective preventive health measures. In Germany, vaccination recommendations are issued by the Standing Committee on Vaccination (STIKO) . Vaccines are offered and administered in Germany mainly in physicians' practices within an opportunistic vaccination system . An opportunistic vaccination system is characterized by the fact that either medical staff offer a vaccination when a suitable opportunity arises or the person to be vaccinated actively requests the vaccination at the practice. Health check-up examinations, for example, provide such opportunities for vaccination offers . In Germany, more than 9 health examinations in pediatric or general practice care are scheduled from infancy to preschool age. From school entry until the 18th birthday, only 3 examinations remain, of which only one must be paid for by all health insurance funds as part of the benefits catalog. In adulthood, occupational health examinations as well as various check-up examinations covered by the health insurance funds as standard benefits can be used in the physician's practice to check the vaccination record and to offer possible vaccinations . One vaccination recommended by the STIKO for children and adolescents is the vaccination against human papillomaviruses (HPV). Every year, almost 8000 people in Germany develop HPV-related tumors, which, in addition to the most common localization at the cervix, can also occur in both sexes in the oropharynx and the anogenital region . The STIKO's HPV vaccination recommendation applies by default to all children and adolescents aged 9 to 14 years; catch-up vaccination is possible until the 18th birthday. The health examinations falling within this period are the J1 examination (12–14 years), which must be covered by all insurance funds as a standard benefit, and the U11 (9–10 years), which is covered voluntarily by some insurance funds. Although the HPV vaccination protects very effectively against HPV-related tumors, coverage for a complete HPV vaccination series in Germany in 2023 was only 55% among 15-year-old girls and 34% among boys of the same age . In a European comparison, Germany thus ranked in the lower third . Because of the high prevention potential, both the World Health Organization (WHO) and the Commission of the European Union (EU) have set the goal of achieving HPV vaccination coverage of at least 90% among 15-year-old girls by 2030 and of substantially increasing coverage among boys of the same age . In contrast to Germany, the vast majority of European countries use structured vaccination systems , in which everyone within the target group is systematically and actively offered a vaccination. A structured vaccination system frequently also includes vaccination reminders for the target group . Various studies show that reminder systems have a positive effect on vaccination coverage; in Germany, however, such systems have not yet been used nationwide. Therefore, one of the 2 project modules of the "Interventionsstudie zur Steigerung der HPV-Impfquoten in Deutschland" (Intervention Study to Increase HPV Vaccination Coverage in Germany, InveSt HPV) addressed potential barriers to the use and
dissemination of invitation and vaccination reminder systems. Within this project module, 2 nationwide quantitative surveys were conducted among (i) office-based physicians working in pediatric care and (ii) parents of children aged 9 to 14 years. The nationwide surveys were complemented by a stocktaking survey of the statutory health insurance funds. The surveys were intended to examine the use of vaccination reminder systems from different perspectives: parents as recipients, and pediatricians and health insurance funds as potential senders of vaccination reminders. Finally, a workshop with vaccination-relevant stakeholders from the healthcare system was held. Based on the evidence compiled in the project module, possible concepts for a future invitation and vaccination reminder system in Germany were developed jointly, using the HPV vaccination as a concrete example. This report presents key results of the surveys of pediatricians and parents and of the stocktaking among the statutory health insurance funds. Further evidence compiled for the workshop is presented in excerpts. Finally, the core elements of an invitation and vaccination reminder system 2.0 for Germany developed in the workshop are briefly outlined.
From August to November 2023, 345 of the 6635 office-based pediatricians organized in the German Professional Association of Pediatricians (BVKJ) participated in the online survey. During the same period, 1805 parents from all over Germany with at least one child aged 9 to 14 years were also surveyed online. Parents were recruited using quotas for educational level and for the child's sex (50% girls, 50% boys). The survey of the statutory health insurance funds was conducted from October to November 2023 (before the Health Data Use Act (GDNG) came into force in March 2024) using a standardized electronic questionnaire in the form of a fillable PDF document. 46 health insurance funds (of currently 95), together covering about 51 million insured persons, took part in the survey. The most important results of the surveys are presented below; the methods, descriptions of the study populations, and further results can be found in the detailed project report .

Pediatricians

The aim of the pediatrician survey was to examine their use of reminder systems for the HPV vaccination as well as possible barriers and incentives. Vaccination status was most frequently checked by the pediatricians themselves using the patient record (27% of N = 1002 mentions) or the presented vaccination card (21%). Checking via the practice management software (PVS) during a practice visit was mentioned comparatively rarely (10%). Occasions for checking vaccination status were primarily a U or J examination and practice visits in general (47% and 38% of 616 mentions, respectively). To remind patients of the HPV vaccination, pediatricians most frequently (68%) used a personal conversation in the practice, 11% used reminder slips (e.g., a note attached to the vaccination card), and 9% used an app. Almost 75% of the practices that primarily issued written HPV vaccination reminders offered them in German only. 35% of the pediatricians reported using a software-based reminder system (SGE) in their practice. No association was observed between the use of an SGE and the practice location (East/West; urban/rural), practice staffing, or the age of the practice owner. The most frequently cited reasons for non-use were the practice's workload, the financial cost, and the time required to send reminders. Practices that used reminder systems employed them for various services: screening and preventive examinations were mentioned most frequently (24% and 23% of 388 mentions, respectively), followed by STIKO standard vaccinations and the HPV vaccination (21% each). Of the SGE users, 60% reported that patients are contacted for the HPV vaccination reminder not automatically by the SGE but by the practice staff. Just under half of the SGE users (48%) reminded their patients of the HPV vaccination once, 35% repeatedly if no vaccination had taken place, and 18% repeatedly regardless of whether the vaccination had already been given. 58% of the pediatricians saw the primary responsibility for reminding about the HPV vaccination with the pediatric practices and 28% with the health insurance fund, followed by the public health office (5%), the state health authority (4%), and the school (2%).

Parents of Children Aged 9 to 14 Years

The parent survey collected their experiences with vaccination reminder systems as well as their preferences regarding reminders for the HPV vaccination.
Regardless of their child's sex, 47% of the surveyed parents had at some point been reminded of the HPV vaccination; this occurred significantly more often among parents with a high socioeconomic status (SES), in urban regions, or among the privately insured. Children whose parents had been reminded of the HPV vaccination were significantly more often vaccinated against HPV (70% vaccinated at least once) than children without a reminder (44%, p < 0.001). Parents were most frequently reminded of the HPV vaccination by the attending medical practice (76% of 1260 mentions) or the health insurance fund (11%). The communication channels used by practices and insurance funds for this purpose are shown in Figs. and . 70% of the surveyed parents expressed the wish for a vaccination reminder. The majority preferred a reminder when their child is within the recommended vaccination age (75% vs. 21% before the recommended age). Parents most frequently wanted the reminder to come from the attending medical practice (57% of 2719 mentions), followed by the health insurance fund (20%). The surveyed parents most frequently preferred a reminder by mail (24% of 3153 mentions), by e-mail (22%), or in a personal conversation at the practice (13%). A personalized reminder was important or very important to 68% of parents. When asked who, from the parents' perspective, is responsible for keeping track of recommended vaccinations for their child (scale from 1 "not at all" to 5 "very strongly" responsible), parents most often saw the responsibility with themselves (M = 4.2), followed by the medical practice (M = 3.8) and the public health office (M = 3.2). Health insurance funds were not included in this rating.

Health Insurance Funds

The survey of the statutory health insurance funds was limited to the use of reminder systems for their insured persons; barriers or wishes were not examined. Of the surveyed statutory health insurance funds, 9% used no invitation or reminder systems. 91% had established such systems for their insured persons and were asked about them in more detail. Representatives of funds with invitation or reminder systems most frequently reported that these are used for screening examinations (55% of 71 mentions), followed by use for recommended STIKO standard vaccinations (18%). Only one fund evaluated whether the invitation/reminder led to uptake of the service. 37% stated that they also remind their insured persons of the HPV vaccination. For this, funds most frequently used their member magazine/newsletter (32% of 25 mentions), the fund's own app (20%), mail (20%), or e-mail (8%). 43.5% of the surveyed fund representatives would consider it sensible to take vaccination status into account in an invitation/reminder. However, the majority (86%) of the funds that reminded about the HPV vaccination stated that they could not determine the vaccination status of their insured persons.

Summary of the Survey Results

Children in Germany whose parents were reminded of the HPV vaccination were significantly more often vaccinated against HPV. The HPV vaccination reminder was most frequently given in a personal conversation at the attending medical practice, whereas parents tend to prefer written reminders. The vaccination status check required before a reminder was mostly performed by pediatricians in analog form, using the record or the vaccination card brought along. In most cases, the vaccination status check was tied to a U/J examination.
Software-based reminder systems are used by only one third of pediatricians in Germany and are thus not yet widespread. Even among pediatricians with an SGE, the vaccination reminder was mostly delivered not through automated processes but by the practice staff, despite the considerable time involved. Although > 90% of the statutory health insurance funds used invitation/reminder systems, only one of these funds chose to evaluate whether the invitation/reminder had led to uptake of the service. Of the 40% of funds that remind about HPV, > 85% stated that they could not determine the vaccination status of their insured persons.
Evidence Base

In preparation for the workshop, the project team compiled further evidence, in addition to the surveys, to illuminate the topic of invitation and vaccination reminder systems from different angles. These evidence impulses were intended to serve the participating workshop stakeholders as a basis for an informed discussion. All impulses are published on the project website . Below, selected contents of the impulses are described in more detail that, from the authors' perspective, are central to discussing possible concepts for a future invitation and vaccination reminder system in Germany.

Established Invitation and Reminder Systems for Children and Adolescents in Germany

Although Germany has no structured vaccination system, most federal states have a structured, legally regulated, and binding invitation scheme for the childhood health and screening examinations (U1–U9). Independently of this, invitations may additionally be issued by the attending medical practice or the respective statutory health insurance fund or private health insurer. Data from the last KiGGS wave 2 (2014–2017) showed that 97.2% of children had completely attended the U3 to U9 examinations (excluding U7a). This proportion was > 90% regardless of sex (girls/boys), SES (low/medium/high), and migration background (none/one-sided/two-sided) . At the same time, consistently high vaccination coverage (complete vaccination series) has been found in recent years for the STIKO-recommended standard vaccinations at the time of the school entry examination: > 80% (e.g., varicella, pneumococci, hepatitis B) and > 90% (measles-mumps-rubella, tetanus-diphtheria-pertussis-polio-Hib, meningococci C), respectively. The school entry examination at age 5–6 years follows shortly after the U9 (60th–64th month of life). In contrast to the U examinations, there is no comparable invitation scheme for the J1 at age 12–14 years . The latest vaccination surveillance analyses based on data from the Associations of Statutory Health Insurance Physicians (KVen) showed J1 participation rates between 41% and 46% for the 2004–2007 birth cohorts (unpublished data). Participation rates were comparable for boys and girls. At the same time, 2 studies from Germany showed an association between practice contact and HPV vaccination uptake: girls who attended the J1 at age 12 had a 7-fold higher probability of having received an HPV vaccination than girls without J1 participation . Girls who attended the U11 (9–10 years) had a considerably higher chance of having received an HPV vaccination than those who did not attend the U11 . The U11 is currently not part of the benefits catalog of the statutory health insurance funds, but the Federal Joint Committee (G-BA) is currently examining the introduction of an additional screening examination for 9- to 10-year-olds ("new U10").

Digital Vaccination

The implementation of vaccination recommendations can be supported by various digital solutions, which also enable vaccination reminders. Below, 2 digital approaches, the electronic patient record (ePA) and the practice app "Meine pädiatrische Praxis", are presented as examples; further approaches include, for example, electronic vaccination management in the practice .
The electronic patient record (ePA) is a digital storage space for medical documents that insured persons can view and manage using apps provided by their statutory health insurance fund. At the time this manuscript was written, the apps had identical basic functions but different additional functions. Initially, the ePA contains medication plans, physicians' letters, inpatient discharge letters, and findings reports in the form of medical information objects (MIO). The ePA is to be expanded step by step so that, in the future, vaccination data will also be made digitally available as part of an electronic vaccination record (eImpfpass) . To use the ePA, physicians need a connection to the telematics infrastructure; however, about two thirds report weekly or daily problems with it . On the patients' side, a "GesundheitsID" and access to digital infrastructure are required. Since 15 January 2025, the "ePA für alle" (ePA for all) has been provided on an opt-out basis: insured persons automatically receive an ePA from their statutory health insurance fund, and it can be accessed in healthcare settings without renewed approval by the insured person, unless they actively object. The federal government has set the goal that 80% of statutorily insured persons use the ePA . However, the implementation timeline and, in part, the concrete design, such as a possible reminder function, are currently still unresolved. The practice app Meine pädiatrische Praxis (formerly "Mein Kinder- und Jugendarzt") was developed by a private provider in cooperation with the BVKJ and is now used by > 1200 practices, or about 40% of all pediatricians (personal communication, BVKJ). Practices can register for app use for a fee but must actively maintain the app themselves. The app offers various functions: among other things, parents can be reminded of appointments, preventive examinations, and vaccinations. For vaccinations, so-called vaccination profiles with a reminder can be sent. Parents can only use the app if the attending practice is registered, and they must enter their child's data themselves. Vaccination reminders are based on the age entered by the parents. Personalized vaccination reminders (i.e., depending on the actual vaccination status) cannot (yet) be sent, as there is currently no interface to the practice management software .

An Invitation System for All: Who Can Reach Whom, with Which Data

Two questions are central to the concept of a structured invitation and vaccination reminder system: Who can reach whom with which data? And with which data can uptake of the service be evaluated? These questions are central above all with regard to equity of access . Figs. a–c show the possible reach of invitations to the HPV vaccination and of an evaluation of uptake for the 3 most important actors: the public health service (ÖGD), practices, and health insurance funds (or insurers).

Public health service. For postal contact, the ÖGD has access to data from the residents' registration offices (Fig. a). These could also contain data on the nationality of the person to be contacted, which could provide an important indication of (additional) spoken languages and a possible need for multilingual invitation and information material.
The ÖGD has no access to data that would allow an evaluation of uptake.

Medical practices. Invitations to the HPV vaccination by medical practices (Fig. b) require up-to-date contact data and consent declarations, which are most likely to be available in an active physician-patient relationship (dark green circles). Patients with an inactive physician-patient relationship or former patients with still-current contact data may also be reachable (light green circles). It can be assumed that practices are able to assess the need for multilingual invitation and information material. An evaluation of individual vaccination uptake is possible for all patients with an active physician-patient relationship. It should be noted, however, that a pediatric care offer is not available to the entire population: in a 2018 survey by the Deutsches Kinderhilfswerk, 34% of parents reported not having sufficient pediatric care in their vicinity . In KiGGS wave 2 (2014–2017), 12% of children and adolescents reported not having used outpatient pediatric care in the past year ; in the online survey of parents within InveSt HPV, this applied to 7% of the 9- to 14-year-olds .

Statutory health insurance funds and private health insurers. As of 2024, there were about 130 health insurance funds and insurers in Germany . In line with the compulsory health insurance for all citizens residing in Germany, 99.9% of the population in Germany is covered by health insurance according to microcensus data . The potential reach of insurance funds and insurers is illustrated in Fig. c. Funds and insurers hold up-to-date data for almost the entire target group: they have, among other things, data on the age, vaccination status, and nationality of the insured person, so that the need for multilingual invitation and information material could also be estimated. For the evaluation of individual service uptake, billing data for medical services can be used (with a time lag). It must be pointed out, however, that some funds expressed strong concerns as to whether evaluations of service uptake for personalized reminders are legally permissible. The survey results among the funds presented here also point in this direction, with a majority of fund representatives stating that they could not determine vaccination status. Although the GDNG was due to come into force in March 2024, granting health and long-term care insurance funds the option of data-based analyses to identify individual health risks through the introduction of Section 25b of the Social Code Book V (SGB V) , concerns continued to be raised. For this reason, the upcoming workshop included a dedicated evidence impulse on the legal assessment of the legal basis for personalized vaccination reminders by statutory health insurance funds and private health insurers .

Conduct of the Workshop

On 12 and 13 April 2024, a workshop with relevant stakeholders from the healthcare system was held in Berlin. The aim was to develop concept proposals for an invitation and vaccination reminder system in Germany.
The workshop was attended by representatives of statutory health insurance funds (GKV) and private health insurers (PKV), the GKV-Spitzenverband, the BVKJ, the German Society of General Practice and Family Medicine (DEGAM), the Federal Ministry of Health (BMG), the federal states, the Federal Centre for Health Education (BZgA), the National Steering Group on Vaccination (NaLI), the Institute for Quality and Efficiency in Health Care (IQWiG), and the Leibniz Institute for Prevention Research and Epidemiology (BIPS). Following short "evidence impulses", the participants were divided into small groups. Together, they discussed the elements they considered important for a concept. The small groups used various materials to make the concept and its associated elements visible in 3 dimensions (Fig. ). In doing so, the stakeholders contributed their different perspectives and experiences, which was central above all to discussing the practicability of the concepts. Finally, the concept building blocks developed by the groups were presented and discussed in the plenary.

Possible Elements for Invitation and Vaccination Reminder Systems in Germany

As a central prerequisite, the participants emphasized cooperative and efficient collaboration among the stakeholders in order to exploit synergies and to reduce or avoid possible duplicate structures. Across the different groups, certain core elements were considered important and effective by all for reminding about the HPV vaccination. The most central element was extending the understanding of preventive care up to the 18th birthday. Specifically, establishing the U11 at age 9–10 years (or the "new U10" at age 9–10 years currently under review by the G-BA) and possibly the J2 as additional statutory screening examinations (and thus including them in the benefits catalog of the statutory health insurance funds) was discussed. These health or screening examinations were regarded as important structured practice contacts for counseling the target group on the HPV vaccination and for administering the vaccination in a timely manner. Thanks to reliably high participation rates, extending the system currently established up to the U9 offers the chance to invite all parents and children to the practice and thereby create an opportunity for the HPV vaccination. To make the relevance of the U11 (or the "new U10") and the J1 clear to parents, the "Gelbes Heft" (yellow booklet), which currently ends with the U9, should be extended to include these examinations, and the extension of the federal states' invitation and feedback system up to the U11 (or the "new U10") and the J1 should be initiated. The participants were also largely in agreement on the "eImpfpass" as part of the "ePA für alle". To make it easy to use for all parties involved (patients/insured persons, practice staff, health insurance funds), interfaces to other software systems, such as the practice management system, are necessary. The eImpfpass can be designed so that users receive reminders for the HPV vaccination and other vaccinations, taking age and vaccination status into account. To be able to use the ePA as the basis for an invitation and vaccination reminder system, an implementation timeline should be agreed upon as quickly as possible. Relevant stakeholders should be involved in the concrete design of the functions, and parents' acceptance of the system should be taken into account.
Important questions for a vaccination reminder function are: What information should a reminder contain (e.g., an appointment, information materials)? And how, in the spirit of equity, can persons who do not use the ePA because they opted out be invited and, where appropriate, reminded?
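Several of these core elements converge on one concrete function: deriving a personalized reminder from age and vaccination status, as envisaged for the eImpfpass above. The following minimal Python sketch illustrates what such eligibility logic could look like. All names (VaccinationRecord, needs_hpv_reminder) are hypothetical and do not reflect the actual ePA/MIO specification; the age window (9 to under 18 years) follows the STIKO recommendation described above, while the 2-dose/3-dose rule is a simplified assumption about the vaccination schedule rather than an authoritative implementation.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VaccinationRecord:
    """Hypothetical, simplified stand-in for an eImpfpass entry."""
    birth_date: date
    hpv_doses: list = field(default_factory=list)  # dates of administered HPV doses

def age_in_years(birth_date: date, on: date) -> int:
    """Completed years of age on the given date."""
    years = on.year - birth_date.year
    if (on.month, on.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def needs_hpv_reminder(record: VaccinationRecord, today: date) -> bool:
    """True if the child is within the recommended window (9 to <18 years)
    and the HPV series is not yet complete. Assumes 2 doses suffice if the
    series was started before age 15, otherwise 3 (simplified rule)."""
    age = age_in_years(record.birth_date, today)
    if not 9 <= age < 18:
        return False
    if not record.hpv_doses:
        return True
    age_at_first_dose = age_in_years(record.birth_date, min(record.hpv_doses))
    required_doses = 2 if age_at_first_dose < 15 else 3
    return len(record.hpv_doses) < required_doses

# Example: an unvaccinated 12-year-old is flagged for a reminder.
child = VaccinationRecord(birth_date=date(2013, 3, 1))
print(needs_hpv_reminder(child, today=date(2025, 6, 1)))  # True
```

In a real system, such a check would run against the records held by the reminder's sender (practice, insurance fund, or ePA), which is precisely where the data-access questions discussed above come into play.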
In Vorbereitung auf den Workshop wurde – neben den Befragungen – vom Projektteam weitere Evidenz zusammengetragen, um das Thema Einladungs- und Impferinnerungssysteme aus unterschiedlichen Blickwinkeln zu beleuchten. Den teilnehmenden Akteur:innen des Workshops sollten die Evidenzimpulse als Grundlage für eine informierte Diskussion dienen. Alle Impulse sind auf der Projektwebseite veröffentlicht . Nachfolgend werden einige Inhalte der Impulse näher beschrieben, die aus Sicht der Autor:innen zentral sind, um über mögliche Konzepte für ein zukünftiges Einladungs- und Impferinnerungssystem in Deutschland zu diskutieren. Etablierte Einladungs- und Erinnerungssysteme im Kinder- und Jugendalter in Deutschland Auch wenn es in Deutschland kein strukturiertes Impfsystem gibt, findet sich in den meisten Bundesländern ein strukturiertes, gesetzlich geregeltes und verbindliches Einladungswesen für die Gesundheits- oder Früherkennungsuntersuchungen im Kindesalter (U1–U9; ). Unabhängig hiervon erfolgen Einladungen ggf. zusätzlich über die versorgende Arztpraxis oder jeweilige gesetzliche Krankenkasse bzw. private Krankenversicherung. Daten der letzten KiGGS-Welle 2 (2014–2017) zeigten, dass 97,2 % der Kinder die U3 bis U9 (ohne U7a) vollständig in Anspruch genommen hatten. Dieser Anteil lag unabhängig vom Geschlecht (Mädchen/Jungen), SES (niedrig/mittel/hoch) und Migrationshintergrund (ohne/einseitig/beidseitig) bei > 90 % . Gleichzeitig fanden sich in den letzten Jahren konstant hohe Impfquoten (vollständige Impfserie) für die von der STIKO empfohlenen Standardimpfungen zum Zeitpunkt der Schuleingangsuntersuchung von > 80 % (z. B. Varizellen, Pneumokokken, Hepatitis B) bzw. > 90 % (Masern-Mumps-Röteln, Tetanus-Diphtherie-Pertussis-Polio-Hib, Meningokokken C; ). Die Schuleingangsuntersuchung im Alter von 5–6 Jahren folgt zeitlich kurz nach der U9 (60.–64. Lebensmonat; ). Im Gegensatz zu den U‑Untersuchungen besteht für die J1 im Alter von 12–14 Jahren kein vergleichbares Einladungswesen . Letzte Analysen der Impfsurveillance aus Daten der kassenärztlichen Vereinigungen (KVen) zeigten J1-Teilnahmeraten zwischen 41 % und 46 % für die Geburtskohorten 2004–2007 (unveröffentlichte Daten, ). Die Teilnahmeraten waren vergleichbar für Jungen und Mädchen. Gleichzeitig zeigten 2 Studien aus Deutschland eine Assoziation von Praxiskontakt und HPV-Impfinanspruchnahme: Mädchen mit J1-Teilnahme im Alter von 12 Jahren hatten eine 7‑fach höhere Wahrscheinlichkeit, eine HPV-Impfung erhalten zu haben, als ohne J1-Teilnahme . Mädchen mit U11-Teilnahme (9–10 Jahre) hatten eine deutlich höhere Chance, eine HPV-Impfung bekommen zu haben, als diejenigen, die die U11 nicht in Anspruch genommen hatten . Die U11 ist aktuell kein Teil des Leistungskataloges der gesetzlichen Krankenkassen, jedoch prüft der Gemeinsame Bundesausschuss (G-BA) derzeit die Einführung einer zusätzlichen Früherkennungsuntersuchung für 9‑ bis 10-Jährige („neue U10“; ). Impfen digital Die Umsetzung von Impfempfehlungen kann durch verschiedene digitale Lösungen unterstützt werden, die gleichzeitig auch Impferinnerungen ermöglichen. Nachfolgend werden 2 digitale Lösungsansätze, die elektronische Patientenakte (ePA) und die Praxis-App „Meine pädiatrische Praxis“, exemplarisch vorgestellt, weitere Ansätze können z. B. das elektronische Impfmanagement in der Praxis sein . 
Die elektronische Patientenakte (ePA) ist ein digitaler Speicherplatz für medizinische Unterlagen, den die Versicherten mithilfe von Apps ihrer gesetzlichen Krankenkasse einsehen und verwalten können. Die Apps haben zum Zeitpunkt, als das Manuskript verfasst wurde, identische Grundfunktionen, aber unterschiedliche Zusatzfunktionen. Initial enthält die ePA Medikationspläne, Arztbriefe, stationäre Entlassungsbriefe und Befundberichte in Form von medizinischen Informationsobjekten (MIO; ). Die ePA soll schrittweise erweitert werden, sodass perspektivisch auch Daten zu Impfungen im Rahmen eines elektronischen Impfpasses (eImpfpass) digital verfügbar gemacht werden . Um die ePA nutzen zu können, ist von ärztlicher Seite eine Anbindung an die Telematikinfrastruktur notwendig, jedoch berichten etwa 2/3 von wöchentlichen oder täglichen Problemen mit ebendieser . Vonseiten der Patient:innen werden eine „GesundheitsID“ und ein Zugang zu digitaler Infrastruktur benötigt. Seit dem 15.01.2025 ist die „ePA für alle“ nach Opt-out-Konzept vorgesehen. Versicherte erhalten automatisch eine ePA von ihrer gesetzlichen Krankenkasse und in der Gesundheitsversorgung kann darauf ohne erneute Freigabe der Versicherten zugegriffen werden – außer sie widersprechen aktiv. Der Bund hat sich als Ziel gesetzt, dass 80 % der gesetzlich Versicherten die ePA nutzen . Umsetzungszeitplan und in Teilen auch die konkrete Ausgestaltung, wie z. B. eine mögliche Erinnerungsfunktion, sind jedoch aktuell noch ungeklärt. Die Praxis-App Meine pädiatrische Praxis (vormals „Mein Kinder- und Jugendarzt“) wurde von einem privaten Anbieter in Zusammenarbeit mit dem BVKJ entwickelt und wird mittlerweile von > 1200 Praxen bzw. etwa 40 % aller Kinder- und Jugendärzt:innen genutzt (persönliche Kommunikation, BVKJ). Praxen können sich gegen Gebühr für die App-Nutzung registrieren, müssen die App aber selbst aktiv bespielen. Die App bietet verschiedene Funktionen: Unter anderem können Eltern an Termine, Vorsorgeuntersuchungen und Impfungen erinnert werden. Für Impfungen können sogenannte Impfsteckbriefe mit Erinnerung versandt werden. Eltern können die App nur nutzen, wenn die versorgende Praxis registriert ist, und müssen die Daten zum Kind selbst eingeben. Impferinnerungen finden auf Grundlage des von den Eltern eingegebenen Alters statt. Personalisierte Impferinnerungen (d. h. in Abhängigkeit vom tatsächlichen Impfstatus) können (bisher) nicht versandt werden, da es derzeit keine Schnittstelle zur Praxisverwaltungssoftware gibt .
Auch wenn es in Deutschland kein strukturiertes Impfsystem gibt, findet sich in den meisten Bundesländern ein strukturiertes, gesetzlich geregeltes und verbindliches Einladungswesen für die Gesundheits- oder Früherkennungsuntersuchungen im Kindesalter (U1–U9; ). Unabhängig hiervon erfolgen Einladungen ggf. zusätzlich über die versorgende Arztpraxis oder jeweilige gesetzliche Krankenkasse bzw. private Krankenversicherung. Daten der letzten KiGGS-Welle 2 (2014–2017) zeigten, dass 97,2 % der Kinder die U3 bis U9 (ohne U7a) vollständig in Anspruch genommen hatten. Dieser Anteil lag unabhängig vom Geschlecht (Mädchen/Jungen), SES (niedrig/mittel/hoch) und Migrationshintergrund (ohne/einseitig/beidseitig) bei > 90 % . Gleichzeitig fanden sich in den letzten Jahren konstant hohe Impfquoten (vollständige Impfserie) für die von der STIKO empfohlenen Standardimpfungen zum Zeitpunkt der Schuleingangsuntersuchung von > 80 % (z. B. Varizellen, Pneumokokken, Hepatitis B) bzw. > 90 % (Masern-Mumps-Röteln, Tetanus-Diphtherie-Pertussis-Polio-Hib, Meningokokken C; ). Die Schuleingangsuntersuchung im Alter von 5–6 Jahren folgt zeitlich kurz nach der U9 (60.–64. Lebensmonat; ). Im Gegensatz zu den U‑Untersuchungen besteht für die J1 im Alter von 12–14 Jahren kein vergleichbares Einladungswesen . Letzte Analysen der Impfsurveillance aus Daten der kassenärztlichen Vereinigungen (KVen) zeigten J1-Teilnahmeraten zwischen 41 % und 46 % für die Geburtskohorten 2004–2007 (unveröffentlichte Daten, ). Die Teilnahmeraten waren vergleichbar für Jungen und Mädchen. Gleichzeitig zeigten 2 Studien aus Deutschland eine Assoziation von Praxiskontakt und HPV-Impfinanspruchnahme: Mädchen mit J1-Teilnahme im Alter von 12 Jahren hatten eine 7‑fach höhere Wahrscheinlichkeit, eine HPV-Impfung erhalten zu haben, als ohne J1-Teilnahme . Mädchen mit U11-Teilnahme (9–10 Jahre) hatten eine deutlich höhere Chance, eine HPV-Impfung bekommen zu haben, als diejenigen, die die U11 nicht in Anspruch genommen hatten . Die U11 ist aktuell kein Teil des Leistungskataloges der gesetzlichen Krankenkassen, jedoch prüft der Gemeinsame Bundesausschuss (G-BA) derzeit die Einführung einer zusätzlichen Früherkennungsuntersuchung für 9‑ bis 10-Jährige („neue U10“; ).
Die Umsetzung von Impfempfehlungen kann durch verschiedene digitale Lösungen unterstützt werden, die gleichzeitig auch Impferinnerungen ermöglichen. Nachfolgend werden 2 digitale Lösungsansätze, die elektronische Patientenakte (ePA) und die Praxis-App „Meine pädiatrische Praxis“, exemplarisch vorgestellt, weitere Ansätze können z. B. das elektronische Impfmanagement in der Praxis sein . Die elektronische Patientenakte (ePA) ist ein digitaler Speicherplatz für medizinische Unterlagen, den die Versicherten mithilfe von Apps ihrer gesetzlichen Krankenkasse einsehen und verwalten können. Die Apps haben zum Zeitpunkt, als das Manuskript verfasst wurde, identische Grundfunktionen, aber unterschiedliche Zusatzfunktionen. Initial enthält die ePA Medikationspläne, Arztbriefe, stationäre Entlassungsbriefe und Befundberichte in Form von medizinischen Informationsobjekten (MIO; ). Die ePA soll schrittweise erweitert werden, sodass perspektivisch auch Daten zu Impfungen im Rahmen eines elektronischen Impfpasses (eImpfpass) digital verfügbar gemacht werden . Um die ePA nutzen zu können, ist von ärztlicher Seite eine Anbindung an die Telematikinfrastruktur notwendig, jedoch berichten etwa 2/3 von wöchentlichen oder täglichen Problemen mit ebendieser . Vonseiten der Patient:innen werden eine „GesundheitsID“ und ein Zugang zu digitaler Infrastruktur benötigt. Seit dem 15.01.2025 ist die „ePA für alle“ nach Opt-out-Konzept vorgesehen. Versicherte erhalten automatisch eine ePA von ihrer gesetzlichen Krankenkasse und in der Gesundheitsversorgung kann darauf ohne erneute Freigabe der Versicherten zugegriffen werden – außer sie widersprechen aktiv. Der Bund hat sich als Ziel gesetzt, dass 80 % der gesetzlich Versicherten die ePA nutzen . Umsetzungszeitplan und in Teilen auch die konkrete Ausgestaltung, wie z. B. eine mögliche Erinnerungsfunktion, sind jedoch aktuell noch ungeklärt. Die Praxis-App Meine pädiatrische Praxis (vormals „Mein Kinder- und Jugendarzt“) wurde von einem privaten Anbieter in Zusammenarbeit mit dem BVKJ entwickelt und wird mittlerweile von > 1200 Praxen bzw. etwa 40 % aller Kinder- und Jugendärzt:innen genutzt (persönliche Kommunikation, BVKJ). Praxen können sich gegen Gebühr für die App-Nutzung registrieren, müssen die App aber selbst aktiv bespielen. Die App bietet verschiedene Funktionen: Unter anderem können Eltern an Termine, Vorsorgeuntersuchungen und Impfungen erinnert werden. Für Impfungen können sogenannte Impfsteckbriefe mit Erinnerung versandt werden. Eltern können die App nur nutzen, wenn die versorgende Praxis registriert ist, und müssen die Daten zum Kind selbst eingeben. Impferinnerungen finden auf Grundlage des von den Eltern eingegebenen Alters statt. Personalisierte Impferinnerungen (d. h. in Abhängigkeit vom tatsächlichen Impfstatus) können (bisher) nicht versandt werden, da es derzeit keine Schnittstelle zur Praxisverwaltungssoftware gibt .
Für das Konzept eines strukturierten Einladungs- und Impferinnerungssystems sind 2 Fragen zentral: Wer kann mit welchen Daten wen erreichen? Mit welchen Daten kann eine Inanspruchnahme der Leistung evaluiert werden? Diese Fragen sind v. a. im Hinblick auf Zugangsgerechtigkeit (Equity) zentral . Die Abb. a–c zeigen die mögliche Reichweite von Einladungen zur HPV-Impfung und einer Evaluation der Inanspruchnahme für die 3 wichtigsten Akteure Öffentlicher Gesundheitsdienst (ÖGD), Praxen und Krankenkassen (bzw. -versicherungen; ). Öffentlicher Gesundheitsdienst. Für eine postalische Kontaktaufnahme stehen dem ÖGD Daten aus den Einwohnermeldeämtern zur Verfügung (Abb. a). Diese könnten auch Daten zur Nationalität der zu kontaktierenden Person enthalten, die einen wichtigen Hinweis auf (weitere) gesprochene Sprachen und einen möglichen Bedarf für mehrsprachiges Einladungs- und Aufklärungsmaterial geben könnten. Auf Daten, die eine Evaluation der Inanspruchnahme ermöglichen würden, hat der ÖGD keinen Zugriff. Ärztliche Praxen. Einladungen zur HPV-Impfung durch ärztliche Praxen (Abb. b) setzen aktuelle Kontaktdaten und Einwilligungserklärungen voraus, die am ehesten bei einer aktiven Arzt-Patient-Beziehung vorhanden sind (dunkelgrüne Kreise). Gegebenenfalls lassen sich auch Patient:innen mit inaktiver Arzt-Patient-Beziehung oder ehemalige Patient:innen mit noch aktuellen Kontaktdaten erreichen (hellgrüne Kreise). Es kann davon ausgegangen werden, dass Praxen den Bedarf von mehrsprachigem Einladungs- und Aufklärungsmaterial einschätzen können. Eine Evaluation der individuellen Impfinanspruchnahme kann für alle Patient:innen mit einer aktiven Arzt-Patient-Beziehung erfolgen. Es gilt aber zu beachten, dass nicht der gesamten Bevölkerung ein pädiatrisches Angebot zur Verfügung steht: In einer Umfrage des Deutschen Kinderhilfswerks von 2018 gaben 34 % der Eltern an, keine ausreichende pädiatrische Versorgung in ihrer Nähe zu haben . In der KiGGS-Welle 2 (2014-2017) gaben 12 % der Kinder und Jugendlichen an, dass sie im letzten Jahr keine ambulante pädiatrische Versorgung in Anspruch genommen hatten ; in der Online-Befragung der Eltern im Rahmen von InveSt HPV betraf dies 7 % der 9‑ bis 14-Jährigen . Gesetzliche Krankenkassen und private Krankenversicherungen. Mit Stand 2024 gab es in Deutschland etwa 130 Krankenkassen und -versicherungen . Entsprechend der Krankenversicherungspflicht für alle Bürger:innen mit Wohnsitz in Deutschland sind laut Mikrozensusdaten 99,9 % der Bevölkerung in Deutschland krankenversichert . Die potenzielle Reichweite von Krankenkassen und -versicherungen wird in Abb. c veranschaulicht. Krankenkassen und -versicherungen verfügen über aktuelle Daten für nahezu die gesamte Zielgruppe: Ihnen liegen u. a. Daten zu Alter, Impfstatus und Nationalität der versicherten Person vor, sodass auch der Bedarf für mehrsprachiges Einladungs- und Aufklärungsmaterial abgeschätzt werden könnte. Für die Evaluation der individuellen Inanspruchnahme von Leistungen können Abrechnungsdaten für ärztliche Leistungen (mit zeitlichem Verzug) genutzt werden. Es muss jedoch darauf hingewiesen werden, dass von einigen Krankenkassen starke Bedenken geäußert wurden, ob Evaluationen der Inanspruchnahme von Leistungen für personalisierte Erinnerungen juristisch zulässig sind. Darauf weisen auch die hier vorgestellten Befragungsergebnisse bei den Krankenkassen hin, in der ein Großteil der Krankenkassenvertreter:innen angab, den Impfstatus nicht ermitteln zu können. 
Obwohl im März 2024 das GDNG in Kraft treten sollte, das durch Einführung des § 25b SGB V den Kranken- und Pflegekassen die Möglichkeit zur datengestützten Auswertung zur Erkennung individueller Gesundheitsrisiken einräumt , wurden weiterhin Bedenken geäußert. Aus diesem Grund gab es in dem anstehenden Workshop einen eigenen Evidenzimpuls zur juristischen Beurteilung der rechtlichen Grundlagen für personalisierte Impferinnerungen von gesetzlichen Krankenkassen und privaten Krankenversicherungen .
On 12 and 13 April 2024, a workshop with relevant healthcare stakeholders was held in Berlin. The aim was to develop concept proposals for an invitation and vaccination reminder system in Germany. The workshop was attended by representatives of statutory health insurance funds (GKV) and private health insurers (PKV), the GKV-Spitzenverband, the BVKJ, the Deutsche Gesellschaft für Allgemeinmedizin und Familienmedizin (DEGAM), the Bundesministerium für Gesundheit (BMG), the federal states, the Bundeszentrale für gesundheitliche Aufklärung (BZgA), the Nationale Lenkungsgruppe Impfen (NaLI), the Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen (IQWiG), and the Leibniz-Institut für Präventionsforschung und Epidemiologie (BIPS). Following short "evidence briefings," the participants were divided into small groups, which discussed the elements they considered important for a concept. The small groups used various materials to visualize the concept and its elements in three dimensions. The stakeholders contributed their different perspectives and experiences, which was particularly important for discussing the practicability of the concepts. Finally, the concept building blocks developed by the groups were presented and discussed in plenary.
As a central prerequisite, the participants emphasized cooperative and efficient collaboration among stakeholders, in order to exploit synergies and to dismantle or avoid duplicate structures. Across the different groups, certain core elements were considered important and expedient by all for reminding people about HPV vaccination. The most central element was extending the concept of preventive care up to the 18th birthday. Specifically, the participants discussed establishing the U11 examination at age 9–10 years (or the "new U10" at age 9–10 years currently under review by the G-BA) and, where applicable, the J2 as additional statutory early-detection examinations (and thus including them in the benefits catalog of the statutory health insurance funds). These health or early-detection examinations were regarded as important structured practice contacts for counseling the target group on HPV vaccination and for administering the vaccination in a timely manner. Because of its reliably high participation rates, extending the system currently established up to the U9 offers the opportunity to invite all parents and children to the practice and thereby create an opportunity for HPV vaccination. To make the relevance of the U11 (or the "new U10") and the J1 clear to parents, the "Gelbes Heft" (the yellow checkup booklet, which currently ends with the U9) should be extended to include these examinations, and the extension of the federal states' invitation and feedback system up to the U11 (or the "new U10") and the J1 should be initiated. The participants were also largely in agreement on the "eImpfpass" (electronic vaccination record) as part of the "ePA für alle" (electronic patient record for all). To make it easy to use for all parties involved (patients/insured persons, practice staff, insurance funds), interfaces to other software systems, such as the practice management system, are necessary. The eImpfpass can be designed so that users receive reminders for HPV vaccination, as well as other vaccinations, taking age and vaccination status into account. To use the ePA as the basis for an invitation and vaccination reminder system, an implementation timetable should be agreed upon as quickly as possible. Relevant stakeholders should be involved in the concrete design of the functions, and parents' acceptance of the system should be taken into account. Important questions for a vaccination reminder function are: What information does a reminder contain (e.g., appointment, information material)? And, in keeping with the equity principle, how can people who opt out of the ePA, and therefore do not use it, be invited and, if necessary, reminded?
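To make the reminder logic described above concrete, the following is a minimal Python sketch of how an eImpfpass-style function might decide who receives an HPV reminder based on age and documented vaccination status. It is an illustration under stated assumptions: the record fields, the two-dose rule, and the 9- to 14-year target window are hypothetical simplifications, not a specification of the actual ePA/eImpfpass.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InsuredPerson:
    # Hypothetical record fields; real ePA/eImpfpass schemas will differ.
    person_id: str
    birth_date: date
    hpv_doses: int          # number of documented HPV vaccine doses
    opted_out_of_epa: bool  # the opt-out case raised in the text

def age_in_years(birth_date: date, today: date) -> int:
    return today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )

def needs_hpv_reminder(p: InsuredPerson, today: date) -> bool:
    """Assumed rule: remind 9- to 14-year-olds with an incomplete
    two-dose series who have not opted out of the ePA."""
    if p.opted_out_of_epa:
        return False  # would need a non-digital channel (equity question)
    age = age_in_years(p.birth_date, today)
    return 9 <= age <= 14 and p.hpv_doses < 2

cohort = [
    InsuredPerson("A", date(2014, 5, 1), hpv_doses=0, opted_out_of_epa=False),
    InsuredPerson("B", date(2013, 2, 9), hpv_doses=2, opted_out_of_epa=False),
    InsuredPerson("C", date(2012, 8, 3), hpv_doses=0, opted_out_of_epa=True),
]
to_remind = [p.person_id for p in cohort if needs_hpv_reminder(p, date(2024, 4, 12))]
print(to_remind)  # ['A']; 'C' illustrates the opt-out gap discussed above
```

The opt-out branch makes the equity question from the text explicit: whatever rule the digital system applies, a separate pathway is needed for people outside it.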
Various studies have already shown that reminder systems have a positive effect on vaccination rates. The surveys conducted as part of this project also demonstrate, for Germany, an association between reminder systems and HPV vaccination status: children whose parents were reminded about HPV vaccination were vaccinated against HPV significantly more often. However, the survey results also make clear that HPV vaccination reminders are currently not issued systematically in Germany and are usually tied to an active physician–patient relationship with practice visits. Children who do not receive pediatric care, or do not receive it regularly, have therefore largely been excluded from vaccination reminders so far. Statutory health insurance funds stated that invitation and reminder systems are used in principle, but primarily for early-detection examinations. At the time of the survey (before the GDNG came into force), insurance funds did not evaluate whether the invitations/reminders led to uptake of the service. In addition, almost all insurance funds that issue HPV vaccination reminders stated that they were unable to determine the HPV vaccination status of their insured persons, which would per se rule out personalized vaccination reminders by insurance funds. This assessment was in part still maintained by insurance fund representatives during the workshop, despite the GDNG having come into force in the meantime. For health insurance funds and insurers to be considered as actors for personalized invitation and vaccination reminder systems, any remaining legal uncertainties or reservations about the use of vaccination data to remind insured persons would need to be clarified or resolved. The results also show that any consideration of future concepts for invitation and vaccination reminder systems must take into account which actors can reliably reach all persons in the target group, independent of characteristics such as socioeconomic status (SES), place of residence, or type of insurance. Only in this way can equity of access be ensured. A pragmatic approach, which also found consensus in the project workshop, could be to couple an invitation and vaccination reminder system for childhood and adolescence to the already established early-detection examination system up to school entry (U3–U9), which is successful thanks to its high participation rates. This entails including the U11 (or the "new U10") in the benefits catalog of the insurance funds and extending the structured, legally regulated, and binding invitation system in the federal states to include the U11 (or "new U10") and the J1. This should go hand in hand with extending the "Gelbes Heft" to include the three preventive examinations available up to the 18th birthday: the U11 (or "new U10"), J1, and J2. In this way, reliable practice contacts and timely vaccination offers can also be created in adolescence. Studies have already shown that attendance at the J1 examination is associated with a higher probability of HPV vaccination. In the coming years, nationwide digital tools such as the "ePA" and "eImpfpass" will open up new possibilities for invitation and vaccination reminder systems. Their concrete design should involve the relevant stakeholders and take into account that high acceptance of the digital system is decisive for its use and thus also for its effect.
Experience with existing, functioning, albeit not nationwide, systems should also be taken into account. Adults and senior citizens in particular, who often have low vaccination rates for STIKO-recommended standard and indication-based vaccinations, are another important target group that must be explicitly considered in a national concept for an invitation and vaccination reminder system. The implementation of such a structured national system is a promising strategy to promote uptake not only of the HPV vaccination but of all STIKO-recommended vaccinations, while at the same time ensuring equitable access to vaccination services for all persons in Germany.
Blue Light-Induced Mitochondrial Oxidative Damage Underlay Retinal Pigment Epithelial Cell Apoptosis

Age-related macular degeneration (AMD) is a progressive degenerative disease affecting the macula, with subsequent irreversible vision loss. Aside from predisposing genetic mutations, inflammation, smoking, and diet, the primary risk factor for disease development remains age: prevalence reaches 4% in individuals below the age of 50 but more than 27% in 80-year-olds. Affected individuals suffer considerable deterioration of their sharp central vision. This results from gradual failure of Bruch's membrane, the choroidal capillaries, and retinal pigment epithelial (RPE) cells, with ensuing dysfunction of photoreceptors. In addition, AMD carries a substantial socioeconomic burden, placing a heavy charge on health systems for patient care. AMD presents in a wet form, which accounts for 15% of cases and is due to growing capillaries that invade the subretinal space and subsequently leak, inducing hemorrhage. Patients in this cohort benefit from treatments that target neovascularization. The disease can also develop as a non-exudative dry form, which accounts for 85% of cases. This form develops following (i) drusen deposition underneath the macula, between Bruch's membrane and the RPE layer, and (ii) lipofuscin accumulation in RPE cells. For this form of the disease there is no approved treatment, so a better understanding of the disease is required to identify new therapeutic targets. Light has been reported as another risk factor for the development and progression of AMD, mostly of the dry form, as lipofuscin stored in RPE cells increases cell sensitivity to different spectra. Ultraviolet (UV) and blue light (BL) wavelengths induce photooxidative stress and photochemical damage in exposed cells. Specifically, BL triggers RPE cell senescence and death. In the eye, the cornea and the crystalline lens block UV spectra. In contrast, BL passes through these structures, reaches the retina and the underlying tissues, and affects their proper functioning. During aging, the lens progressively turns yellowish, which buffers BL radiation entering the eye and protects the posterior eye structures. Nonetheless, this natural protective effect is lost upon lens removal or replacement. Oxidative stress and apoptosis are linked physiological phenomena. ROS and mitochondria play pivotal roles in the induction of the apoptotic cascade under both physiological and pathologic conditions. In the context of AMD, we reported that BL induced oxidative stress and subsequent cytotoxicity in cultured human RPE cells, and that increased drusen deposition triggered oxidative stress and RPE cell apoptosis in human cadaveric eye specimens. The use of BL-filtering devices mitigated these effects. Here, we determined the mechanisms behind BL-induced damage to RPE cells. While increased ROS levels did not affect RPE cell proliferation, they induced a significant decrease in mitochondrial membrane potential and an increase in RPE cell apoptosis. BL-induced RPE cell apoptosis resulted from activation of the caspase cascade in a ROS-dependent manner.
Proteomic analyses revealed that BL decreased the expression levels of several ROS-detoxifying enzymes in exposed RPE cells, which would prolong oxidative stress in these cells and sustain the cytotoxic effects of BL. Together, our findings bring new insights into the involvement of BL in RPE cell damage and its putative role in the progression of AMD. Filtering these radiations, or the use of antioxidants, are avenues to block or delay BL-mediated RPE cell apoptosis and thereby counteract disease progression.
2.1. BL-Induced Oxidative Stress in RPE Cells

In this study, we used A2E-loaded ARPE-19 cells and human primary RPE cells. RPE cells were exposed to BL under a Solar Simulator to normalize in vitro light exposure to the sunlight reaching the eye in vivo. We found that BL exposure significantly increased the levels of total cellular ROS and mitochondrial superoxide anion in both primary RPE cells and ARPE-19 cells. These data show that BL induces oxidative stress in human RPE cells.

2.2. BL Is Cytotoxic to RPE Cells in a ROS-Dependent Manner

We previously reported that BL affected RPE cell growth, which might be due to an effect on cell proliferation or cell viability. Here, we first investigated cell proliferation and found that BL did not affect RPE cell proliferation as assessed by cell cycle analysis (Fig. a). When we studied the effects of BL on cell viability, we found that it significantly increased RPE cell apoptosis, while it did not affect cell necrosis (Fig. b). To determine whether the BL-induced cytotoxic effects were linked to increased ROS production, we pretreated cells with the ROS scavenger NAC. We found that quenching ROS production abolished BL-induced apoptosis. This indicates that BL-elicited oxidative stress triggered apoptotic cell death in RPE cells.

2.3. BL Induces ΔΨM Collapse and Caspase Pathway Activation

It is well recognized that the mitochondrial respiratory chain, oxidative stress, and cell growth are linked physiological processes. Specifically, ROS production and ΔΨM defects drive induction of the apoptotic cascade. We therefore examined the molecular links underlying BL-induced oxidative stress and cytotoxicity in RPE cells. We assessed ΔΨM and found that BL exposure significantly reduced it, by 55% and 60% in primary RPE cells and ARPE-19 cells, respectively. These effects were reduced following pre-treatment of cells with NAC. Since ΔΨM collapse is accompanied by caspase activation, we verified this apoptosis-inducing pathway. We observed that BL exposure increased the levels of activated caspases 9/3/7 approximately two- to four-fold. Interestingly, all these effects were prevented by NAC pre-treatment of RPE cells, indicating that BL-induced oxidative stress in RPE cells triggers ΔΨM collapse and subsequent activation of caspase cascade-mediated apoptosis.

2.4. BL Decreased the Expression of ROS-Detoxifying Enzymes in RPE Cells

To identify putative factors associated with BL effects on RPE cells, we performed proteomic analyses. We identified 2810 proteins, of which 1404 (50%) were detected in all analyzed RPE samples. High percentages of detected proteins were shared between non-treated cells (71.1–78.3%) and between BL-exposed cells (67.7–79.6%) (Fig. a(i,ii)). In addition, non-exposed and BL-exposed cells shared 2369 proteins, while 288 and 153 proteins were exclusively present in non-exposed or BL-exposed samples, respectively (Fig. a, insert). As a readout for the cellular origin of the analyzed proteins, we detected a panel of proteins that are specific markers of RPE. Notably, we found that 44 proteins were upregulated and 129 proteins were downregulated in BL-exposed cells. We focused our analyses on factors involved in the cellular response to oxidative stress. We found that many ROS-detoxifying enzymes were downregulated in BL-exposed cells (Fig. b,c). To identify the physiological processes to which the identified proteins are related, we clustered the most differentially expressed proteins into gene ontology categories. Characterization by biological process highlighted categories consistent with response to oxidative stress and cellular response to stress among proteins down-expressed in BL-treated cells (i.e., ETFDH, GSS, PXDN, and PRDX6; 5.3-fold decrease). In contrast, proteins highly expressed in BL-treated cells clustered in categories consistent with apoptotic signaling and NHEJ-associated DNA repair pathways (i.e., ANXA5, HSPA5, PRKDC, THBS1, SLC25A5, SLC25A6, and TP53BP1; 32.4-fold enrichment). This is in line with our findings that BL elicited ROS-mediated apoptosis and produced ROS-induced DNA damage that caused activation of the DNA repair machinery.
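For readers reproducing this kind of overlap summary, the shared/exclusive protein counts reported above can be computed with simple set arithmetic once each sample's protein identifiers are available. The sketch below is a minimal illustration with made-up identifiers; it is not the Scaffold workflow used in the study, where real input would be accession lists exported per sample.

```python
# Minimal sketch of the overlap bookkeeping behind a Venn-style summary.
# Protein IDs are placeholders standing in for UniProt accessions.
control_samples = [
    {"P1", "P2", "P3", "P4"},
    {"P1", "P2", "P3", "P5"},
    {"P1", "P2", "P4", "P5"},
]
bl_samples = [
    {"P1", "P2", "P6"},
    {"P1", "P2", "P3", "P6"},
    {"P1", "P6", "P7"},
]

control_union = set().union(*control_samples)
bl_union = set().union(*bl_samples)

all_proteins = control_union | bl_union                      # detected anywhere
in_every_sample = set.intersection(*control_samples, *bl_samples)
shared = control_union & bl_union                            # both conditions
control_only = control_union - bl_union                      # non-exposed only
bl_only = bl_union - control_union                           # BL-exposed only

print(len(all_proteins), len(in_every_sample), len(shared),
      len(control_only), len(bl_only))   # 7 1 3 2 2 for this toy input
```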
BL is a risk factor for AMD. We reported that it induced oxidative stress in RPE cells in vitro and increased drusen deposition that triggered RPE cell apoptosis in human eyes. In this study, we determined molecular mechanisms underlying BL-induced damage to primary human RPE cells. Using a Solar Simulator, we normalized in vitro light exposure to the light reaching the retina in vivo. While this remains an artificial model and condition, it helps in understanding the behavior of RPE cells under sunlight-like illumination. BL increased ROS levels in RPE cells, elicited a collapse of the ΔΨM, and increased apoptosis following caspase activation. BL also decreased the expression of detoxifying enzymes, which sustains oxidative stress and its cytotoxic effects. Light exposure is toxic to many tissues and underlies many diseases. We have reported on its involvement in the pathogenesis of uveal melanoma. It is also responsible for other ocular diseases (cataract and AMD). While UV radiations are blocked by the cornea and lens, visible light crosses these tissues and reaches the posterior eye structures. Of these high-energy radiations, BL displays the most cytotoxic effects on RPE cells. We extend these observations by using primary RPE cells from aged donors and by demonstrating a direct link between BL-induced ROS production and RPE cell cytotoxicity. The use of antioxidants rescued RPE cells from BL-induced damage. Notably, following BL exposure, the levels of cellular ROS increased more than seven-fold, whereas the levels of the mitochondrial superoxide anion increased only two-fold, because this unstable intermediate is readily converted to more stable metabolites. Increased ROS levels induce mitochondrial DNA damage and dysfunction, with subsequent cellular damage. This triggers various degenerative pathologies, such as AMD. We found that BL induced ΔΨM collapse, activated the caspase cascade, and caused cell apoptosis in a ROS-dependent manner. Therefore, mitochondrial dysfunction is likely to play an important role in the induction of the observed, ROS-driven RPE cell apoptosis. Antioxidant mechanisms are suppressed in A2E-loaded RPE cells. Our proteomic analyses showed that BL significantly reduced the levels of many antioxidant enzymes, which might exacerbate RPE cell cytotoxicity. It should be highlighted that ROS may target different cellular components (i.e., proteins, lipids, DNA) and induce, for example, lipid peroxidation or DNA damage that culminate in cell dysfunction. During aging, RPE cells face different insults, such as light and oxidative stress. BL induces lipofuscin deposition and subretinal drusen accumulation. During the visual cycle, these light-absorbing structures are processed by RPE cells. For photoreceptors to work effectively, outer segments need to be replaced daily, and RPE cells act as the recycling station for this phagocytosis-associated process. In this way, they ensure that debris does not build up underneath the retina. Phagocytized outer segments are digested in RPE lysosomes, but this reaction is hampered by oxidative stress. Subsequently, undigested residues form A2E-rich lipofuscin, which has an absorbance peak at 350–435 nm. This increases RPE cell photosensitization and triggers a vicious circle.
In addition, deposition of drusen during aging impairs hydraulic conductivity and causes RPE cell malnutrition, with subsequent neurodegeneration. We mimicked this situation in vitro by using ARPE-19 cells loaded with synthetic A2E and found that the effects of BL on these cells were almost the same as on primary cells. Based on our findings, several therapeutic avenues for AMD are possible. Interventions that counteract oxidative stress have been shown to be beneficial in the treatment of many diseases. Application of this strategy is promising, as antioxidants are already used in ophthalmologic clinics. The alteration of mitochondrial function suggests that it may be a potential target for disease prevention. Mitochondrial stimulation protects RPE cells from oxidative damage. Following cataract surgery, the protective function of the age-associated yellowing lens against BL is lost. The recent use of BL-filtering intraocular lenses to replace the natural crystalline lens was reported to restore this deficiency by reducing the levels of produced ROS and RPE cell mortality. These devices filter the “bad” BL (below 460 nm) but not the “good” BL (above 460 nm) that is involved in the regulation of the circadian rhythm. Photobiomodulation (PBM), a process that regulates physiological conditions following light exposure, promotes cellular fitness. PBM is currently used in physiotherapy, arthritis, wound repair, and sports medicine. It acts through activation of the mitochondrial respiratory chain, with subsequent normalization of cellular functions (i.e., proliferation, survival, and cytoprotection). Recently, PBM was shown to have beneficial effects in AMD, as it induced a reduction in the size and number of drusen. Its action may also involve the regulation of oxidative stress and mitochondrial function at the level of RPE cells.
4.1. Human Eye Procurement for Primary RPE Cell Isolation and Cell Culture

Human eyes (n = 6; 3 males and 3 females, 65–76 years old) were obtained from the Centre Hospitalier Universitaire de Québec (Canada), following informed consent from the donor's next of kin, and were used in accordance with a protocol approved by the ethics board of the RI-MUHC (#2019-5314) and with The Code of Ethics of the World Medical Association. Primary RPE cell cultures were established as reported previously. In all experiments, cells were used between the second and fourth passages (exponential growth phase and presence of cytoplasmic pigmented granules). The ARPE-19 cell line was obtained from Cedarlane (ON, Canada) and was maintained in DMEM-F12 medium supplemented with 10% FBS and antibiotics (Corning, AZ, USA). These cells were used for all experiments at early passages (<20). As reported previously, ARPE-19 cells were loaded with A2E (20 µM) 24 h before exposure to light.

4.2. Cell Exposure to BL

Cells were exposed to BL when they reached 70% confluence. Cultures were maintained in the dark, wrapped in aluminum foil, at 37 °C and 5% CO2. The cell culture medium was removed and replaced with D-PBS supplemented with calcium and magnesium, and cells were exposed under a solar simulator (TSS-156R, OAI, OAInet, Milpitas, CA, USA) set at 30 mW/cm² for 30 min, in the presence or absence of NAC. NAC (1 mM; Sigma-Aldrich, St. Louis, MO, USA) was added to cells 24 h before exposure to BL and during BL exposure. A blue dichroic filter (Edmund Optics Inc., Bengaluru, India) was used to allow only BL to pass and reach the cells.

4.3. Reactive Oxygen Species (ROS) Detection

We analyzed both total cellular ROS and mitochondrial superoxide anions using the DCF-DA and MitoSOX Red probes, respectively, according to the manufacturer's protocols (ThermoFisher, Waltham, MA, USA). Fluorescence was read using an Infinite M200 Pro microplate reader (Tecan, Männedorf, Switzerland). Reading parameters were entered manually to normalize fluorescence measurements between experiments.

4.4. Cell Cycle and Apoptosis Analyses

For cell cycle analyses, cells were fixed in ice-cold ethanol (70%) for 2 h and labeled with propidium iodide (PI, 50 μg/mL; Sigma-Aldrich, St. Louis, MO, USA). Cells were acquired on a BD FACSCanto II flow cytometer at a flow rate of ~400 events/second. Doublets were excluded using FSC-channel bivariate plots of Area vs. Height parameters. For apoptosis analysis, we used the Alexa Fluor 488 Annexin V/Dead Cell Kit (ThermoFisher, Waltham, MA, USA) following the manufacturer's instructions. Approximately 20,000 cells were acquired per sample at a rate of ~500 events/second. Analyses were performed using FlowJo software (version 10.10).

4.5. Mitochondrial Membrane Potential (ΔΨM) Measurement

ΔΨM was assessed using the JC-1 probe according to the manufacturer's instructions (Cayman, MI, USA). Fluorescence was read using the Infinite M200 Pro microplate reader. Two measurements were performed, at Ex/Em 535/595 nm and 485/535 nm, for red J-aggregates and green monomers, respectively. Data are presented as the ratio of J-aggregate to monomer values.

4.6. Western Blot and Mass Spectrometry (MS) Proteomic Analyses, and Database Search

For MS analyses, cell samples were resuspended in PBS. Cell preparations for Western blot were homogenized in RIPA buffer containing protease inhibitors (Sigma-Aldrich, St. Louis, MO, USA) at 4 °C for 30 min. For immunoblotting, proteins were resolved on precast polyacrylamide gels and transferred to PVDF membranes (Bio-Rad, Hercules, CA, USA). Membranes were probed with rabbit anti-caspase 9 (cleaved Asp353) (ThermoFisher, Waltham, MA, USA) and mouse anti-β-actin (Sigma-Aldrich, St. Louis, MO, USA) antibodies, followed by HRP-conjugated goat anti-rabbit and goat anti-mouse antibodies (Sigma-Aldrich). Protein signals were visualized using ECL Prime Western blot detection reagent (Sigma-Aldrich) in a ChemiDoc system (Bio-Rad, Hercules, CA, USA). Densitometric analysis was performed using ImageJ software (version 1.54g). Liquid chromatography-tandem mass spectrometry proteomic analyses were performed on protein samples as previously described. Raw data were converted into *.mgf format (Mascot generic format) and searched with the Mascot 2.6.2 search engine against human protein sequences (UniProt 2019). Database search results were loaded into Scaffold 4.10.0 for spectral counting, statistical treatment, data visualization, and quantification. Samples with low total protein counts and low spectrum counts were excluded from the analyses. A p-value cut-off of 0.05 and a fold change of ≥2 were used to identify differentially expressed proteins. The identified protein list in Scaffold was exported to Microsoft Excel sheets and uploaded into the DAVID Bioinformatics database (v2023q4) for functional gene enrichment and annotation analysis (gene ontology analyses). In addition, bioinformatic analyses were performed using the FunRich software (version 3.1.3).

4.7. Caspase 3/7 Activation Analyses

Caspase pathway activation was analyzed using the CellEvent Caspase-3/7 probe (ThermoFisher, Waltham, MA, USA) as per the manufacturer's protocol. Following staining, cells were mounted with coverslips in mounting medium with DAPI (Vectorlabs, MA, USA) and visualized using an LSM780 confocal microscope (Zeiss, Jena, Germany).

4.8. Statistical Analyses

All experiments were performed with 6 independent primary RPE cell cultures or at least 3 independent ARPE-19 cell cultures. Data were compared using ANOVA followed by Dunnett's post hoc test for multiple comparisons against a single control group. A p value < 0.05 was considered statistically significant.
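As a minimal illustration of the differential-expression filter described in Section 4.6 (p < 0.05 and fold change ≥ 2), the following Python sketch applies these cut-offs to a small spectral-count table. The column names, the use of Welch's t-test, and the +1 pseudocount are assumptions made for the sketch; the actual statistics in the study were computed in Scaffold.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical spectral-count table: rows = proteins,
# columns = replicate samples per condition.
df = pd.DataFrame(
    {
        "ctrl_1": [30, 5, 12], "ctrl_2": [28, 6, 10], "ctrl_3": [35, 4, 11],
        "bl_1": [10, 6, 30], "bl_2": [12, 5, 34], "bl_3": [9, 7, 29],
    },
    index=["PRDX6", "ACTB", "ANXA5"],
)
ctrl = df[["ctrl_1", "ctrl_2", "ctrl_3"]].to_numpy(float)
bl = df[["bl_1", "bl_2", "bl_3"]].to_numpy(float)

# Welch's t-test per protein across replicates.
pvals = stats.ttest_ind(bl, ctrl, axis=1, equal_var=False).pvalue
# Fold change of mean counts; +1 pseudocount avoids division by zero.
fc = (bl.mean(axis=1) + 1) / (ctrl.mean(axis=1) + 1)

result = pd.DataFrame({"p": pvals, "fold_change": fc}, index=df.index)
# Keep proteins changed >= 2-fold in either direction at p < 0.05.
hits = result[(result["p"] < 0.05) & (np.maximum(fc, 1 / fc) >= 2)]
print(hits)  # flags PRDX6 (down) and ANXA5 (up) in this toy example
```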
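For the statistics described in Section 4.8 (ANOVA followed by Dunnett's post hoc test against a single control group), a plausible open-source equivalent is sketched below. It assumes SciPy ≥ 1.11, where scipy.stats.dunnett is available, and uses made-up fluorescence values; it is not the software actually used for the published analyses (Prism-style packages implement the same tests).

```python
from scipy import stats

# Hypothetical normalized ROS readings (arbitrary units),
# e.g., from 6 independent primary RPE cultures per group.
control = [1.00, 0.95, 1.10, 1.02, 0.98, 1.05]
bl      = [7.10, 6.80, 7.50, 6.90, 7.30, 7.00]
bl_nac  = [1.60, 1.40, 1.80, 1.50, 1.70, 1.55]

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(control, bl, bl_nac)

# Dunnett's test: each treatment vs. the single control group
# (requires SciPy >= 1.11).
dunnett = stats.dunnett(bl, bl_nac, control=control)

print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.2g}")
for name, p in zip(["BL", "BL + NAC"], dunnett.pvalue):
    flag = "significant" if p < 0.05 else "not significant"
    print(f"{name} vs. control: p = {p:.2g} ({flag})")
```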
BL exposure elicits oxidative stress in RPE cells, which triggers mitochondrial damage and cell apoptosis. Quenching the ROS produced following BL exposure protects these cells. Proposed strategies for counteracting the deleterious effects of BL come down to blocking these radiations or targeting their downstream cellular effects. Overall, our findings provide a rationale for the use of multiple strategies to protect the posterior segment of the eye, and particularly the RPE layer, from the deleterious effects of BL.
Implementation and Clinical Adoption of Precision Oncology Workflows Across a Healthcare Network

Cancer genotyping has become part of the standard of care in oncology. Treatment options evolve, and numerous medical societies and professional organizations govern the process of incorporating new findings into updated guidelines. For example, the NCCN guidelines continue to evolve over time and recommend combinations of tests for a series of tumors (e.g., PD-L1 + EGFR + ALK + ROS1 + RET + MET exon 14 skipping + KRAS G12C + BRAF in advanced-stage non-small cell lung cancer). New regulatory paradigms have increased the efficiency of the Food and Drug Administration (FDA) review process. As a result, the number of authorized therapies that rely on biomarkers (i.e., companion diagnostics) has increased sharply in recent decades and currently comprises over 70 approvals. Concomitantly, the number of oncology clinical trials is growing, but many protocols fail to complete because they do not meet their enrollment targets, at least in part due to the lack of broad-scale testing. More importantly, patients may miss out on a potentially beneficial clinical trial therapy because they were not tested broadly for certain biomarkers. In addition, racial and socioeconomic disparities affect the quality of cancer care for many patients, compromising access to biomarker testing and clinical trial enrollment. Despite substantial progress, realizing and maintaining precision oncology continues to rely on performing the right tests for the right patient at the right time. Healthcare networks promise improved care by harmonizing access to experts and medical services across the system while simultaneously reducing redundant administrative overhead. Realizing this promise requires careful coordination and consideration of social determinants to identify gaps and provide equity-oriented care. In the context of precision oncology, this coordination entails the integration of various molecular tests. Importantly, the complexity and cost coverage of these tests are associated with significant administrative hurdles (e.g., prior authorization, sample selection, result interpretation, and denial management). Overcoming these administrative challenges while accounting for progress in the field requires novel approaches. Several commercial solutions exist, including streamlining of test orders and remote testing, prior authorization, and trial matching. However, these solutions rely heavily on local pathology and/or information technology (IT) services for documentation, data sharing, and frequent status updates. Furthermore, the value proposition of these additional cost components (overhead) relies largely on longer-term outcome measures that require separate initiatives and efforts for identification and tracking. To our knowledge, a straightforward and cost-efficient approach to communicating harmonized molecular test combinations for realizing precision oncology across sites has not been established. Here we report the implementation and clinical adoption of precision oncology workflows across a healthcare network. This included the development, roll-out, and continuous improvement of molecular order sets for appropriate test selection by disease center. We examined adoption patterns across all gastrointestinal malignancies over a two-year period.
The frequency of new discoveries in precision oncology and the complexity of molecular test combinations underscore the urgent need for efficient and continuously updated clinical decision support mechanisms. A cost-cognizant communication tool across disease centers is an essential component for the effective delivery of precision oncology to cancer patients.
Project Design and Setting

The project was designed as a prospective quality improvement initiative including a retrospective chart review. The prospective component of the project aimed to optimize clinical test ordering practice and did not require formal review or approval by the institutional review board (IRB, Human Research Committee, version 25 May 2012). IRB approval was obtained for the retrospective chart review (IRB 2008-P-002165). Cancer care was coordinated as a subspecialized tertiary care practice that includes inpatient, outpatient, and a network of community-based sites. All laboratory tests were performed in CLIA-certified laboratories. The molecular laboratory offered 41 high-complexity in vitro diagnostic tests; for certain biomarkers (e.g., HER2, PD-L1, and MMR), immunohistochemistry (IHC)-based assays were included. Prior authorization, consent management, cost estimation, and appeal workflows were managed by the lab, in close coordination with various hospital-based groups.

Order Sets and Definitions

Order set refers to a combination of recommended tests by disease type and setting (e.g., tumor type, stage, and grade). Order sets were designed by an organ- or disease-center-specific working group composed of molecular pathologists, medical oncologists, and subspecialty experts from surgical pathology. The working groups considered FDA-approved agents with companion diagnostic designation as well as relevant biomarkers mentioned in professional guidelines (e.g., NCCN and cIMPACT). The order set design also accounted for recent tumor-agnostic FDA approvals for immune checkpoint inhibitors (ICI). Based on the package insert for each agent and disease setting (illustrated for pembrolizumab), we added the relevant biomarker(s), including PD-L1 (IHC), MMR (IHC), and tumor mutational burden status (TMB, provided by the NGS-based mutational analysis/snapshot assay). Furthermore, we included biomarkers with emerging evidence (i.e., peer-reviewed evidence) after considering applicable governmental and private payor policies. We excluded all research-based biomarker testing. In disease settings for which more than one biomarker-guided therapy may be relevant (FDA approval or emerging evidence), comprehensive testing using next-generation sequencing-based multi-gene panels is recommended. For anatomic figures in order sets, we used BioRender.com. For the design of each order set, we tracked the start date, the number of and hours in meetings, the first approved version date, and the go-live date. We also compared the rate of order set adoption after rollout, by site.

Data Analysis

For the companion diagnostics review, we performed ongoing monthly checks of FDA announcements and pulled data from the FDA's medical device database. We extracted the names of the companion diagnostic tests, biomarker, test type, cancer type (if applicable), submission and approval dates, and the relevant treatment (status: 3/22/2021). For the clinical impact analysis, we used the first (gastrointestinal) order set and compared the year before (2/1/2017–1/31/2018) to the year after roll-out (2/1/2018–1/31/2019). We pulled data from our laboratory information system using a customized Python script based on the pandas, NumPy, and Matplotlib libraries. For every ordered test we assigned a “recommended” vs. “non-recommended” label based on whether the specific test was, or was not, part of the order set that the multidisciplinary panelists had agreed upon for that disease and stage. We defined clinical adoption as the total number (or fraction) of recommended orders. Notably, the analysis was restricted to requests submitted to our molecular laboratory. We compared the total number of tests, the average number of tests per order, and the number of tests by primary site. Test results were assigned either a “normal” or “abnormal” status. All variants were classified following consensus recommendations, and for all “abnormal” results we assigned one of three labels: non-actionable finding, potentially actionable finding not used for patient management, or actionable finding resulting in targeted treatment, including clinical trial enrollment. “Actionable” indicates an association between the molecular diagnostic finding and sensitivity (or resistance) to treatment with a specific FDA-authorized drug (either for that tumor, or for another cancer indication), or the possibility of enrollment in a clinical trial specific to that tumor and molecular alteration. The assumption was that the implementation of molecular order sets would reduce the number of non-recommended tests. However, to formally investigate unintentional harm due to discouraging providers from ordering certain tests, we performed a chart review of all non-recommended orders (after exclusion of redundant, duplicate, screening, or confirmatory tests) and compared the number of actionable alterations before and after roll-out. Data were analyzed using Prism 9 (GraphPad Software Inc., San Diego, CA, USA) and Microsoft Excel for Mac V16.48 (Microsoft Corp., Redmond, WA, USA), and we considered P values < .05 as indicative of a statistically significant difference.
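The labeling and adoption metric just described lend themselves to a few lines of pandas. The sketch below is a simplified stand-in for the customized script mentioned above; the column names, the order-set lookup table, and the example rows are hypothetical, not the actual laboratory information system schema.

```python
import pandas as pd

# Hypothetical order-set lookup: recommended tests per (disease, stage).
ORDER_SETS = {
    ("colorectal", "IV"): {"KRAS/NRAS/BRAF panel", "MMR IHC", "NGS panel"},
    ("gastric", "IV"): {"HER2 IHC", "PD-L1 IHC", "MMR IHC"},
}

orders = pd.DataFrame(
    [
        ("pt1", "colorectal", "IV", "NGS panel", "2018-03-01"),
        ("pt1", "colorectal", "IV", "single-gene EGFR", "2018-03-01"),
        ("pt2", "gastric", "IV", "HER2 IHC", "2018-06-12"),
    ],
    columns=["patient", "disease", "stage", "test", "date"],
)

def is_recommended(row: pd.Series) -> bool:
    """True if the test belongs to the order set for that disease/stage."""
    return row["test"] in ORDER_SETS.get((row["disease"], row["stage"]), set())

orders["recommended"] = orders.apply(is_recommended, axis=1)

# Clinical adoption = fraction of orders matching the order set.
adoption = orders["recommended"].mean()
print(orders)
print(f"Adoption: {adoption:.0%}")  # 2 of 3 example orders -> 67%
```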
Realizing Precision Oncology in a Modern Healthcare Network To improve access to precision oncology in a complex healthcare system ( ), a basic requirement for effective patient management is the standardization of molecular testing recommendations across affiliated sites. To address test order harmonization, we optimized molecular ordering for gastrointestinal (GI) malignancies. In a collaboration between molecular pathology, gastrointestinal oncology, and surgical pathology, we created a “GI Molecular Order Set” ( ). The order sets include anatomic schemes that can be used for patient encounters and visual guidance. It outlines the most relevant clinical settings (i.e., primary site and stage) and lists the molecular tests indicated for each individual condition. The molecular order sets reflect our best practices and have evolved over time (current version: 2021.v3). Implementation of a Molecular Order Set for Common Gastrointestinal Cancers The first order set was officially launched in February 2018. At that point, when submitting a GI cancer order to the molecular diagnostics lab, clinicians could simply choose “Panel Testing,” to request the designated list of recommended molecular tests for each cancer type/stage. It is worth noting that panel testing was not enforced; it was offered as a simplified, “one-click,” efficient alternative to selecting individual tests, but clinicians still had the option to order tests of their choice. After the GI order set roll-out, we experienced changes in our total volumes ( ). By comparison, we noted a significant increase in the number of patients with GI malignancies referred to molecular testing (17%; P = .006), accompanied by significant increases in GI orders for testing (22%; P =.008) and in actual GI molecular tests (19%; P < .001) ( ). Clinical Adoption and Change in Test Order Practice To evaluate the impact of the GI order set, we compared the molecular requests submitted to our lab 1 year before and 1 year after roll-out and show that the initiative had an immediate impact on the fraction of recommended tests, which rose from 84% to 93% ( ). We consider the high baseline rate of compliance with recommended tests indicative of oncologist expertise, and the overall 9% increase in recommended test orders as a net positive impact of the initiative. Conversely, the effect on non-recommended orders was a reduction from 16% to 7% ( ). The observed shift in order practice was statistically significant ( P < .001), and consistent with clinical adoption of the order sets. Although assessment of the adoption across specific network sites was not the primary goal of the initiative, examination by site showed adoption of the order sets in at least 7 network sites ( ). Specifically, the fraction of recommended orders was significantly higher in the network (99%) when compared to orders from providers primarily practicing at the main campus (92%; P = .005). To improve patient outcomes across the network, the implementation of best practice testing recommendations should, ideally, increase actionable findings and broaden therapeutic options. While the assessment of patient management decisions and clinical trial enrollments fall outside the scope of this report, we examined the rates of “abnormal” test results, as a surrogate for potentially relevant findings. In our cohort, 39% of recommended tests yielded an “abnormal” test result (i.e., any positive finding), while only 11% of non-recommended tests were “abnormal” ( ). 
The overall fraction of "abnormal" findings was significantly higher in the tests recommended by the guidelines (P < .001), suggesting that the order set design appropriately enriched for potentially relevant molecular assays that consequently uncovered clinically significant findings. After order set roll-out, we observed a 4% rise in "abnormal" findings for recommended tests and a 2% rise in "abnormal" findings for non-recommended tests. While we cannot prove that this shift was due to the order sets, there was a combined 6% increase in "abnormal" results (P = .002).
No Significant Impact in the Administration of Molecularly Matched Therapies Driven by Non-Recommended Orders
Unintended consequences of utilization management are of key concern. Specifically, a utilization management strategy may cause unintended harm by discouraging providers from ordering certain tests, with subsequent failure to identify actionable results. We therefore performed a full chart review of all non-recommended tests to look for potentially actionable findings and screened for molecularly informed treatments initiated by non-recommended test results. We mapped the distribution of non-recommended tests according to cancer type, assay type, actionability, and treatment decisions. Over the two-year period, 9% of non-recommended tests yielded potentially actionable results (n = 18 before and n = 12 after). When considering all molecular requests (n = 1323 before, n = 1580 after), the fraction of non-recommended tests with actionable findings dropped by 0.6% (from 1.4% to 0.8%). Within the subset of non-recommended tests (n = 207 before, n = 113 after), the proportion of actionable findings increased by 2% (from 9% to 11%; P = .55). Notably, a total of 3 patients (n = 1 before, n = 2 after) received treatment based on actionable findings detected by non-recommended tests. Specifically, one patient received an off-label prescription, and two patients enrolled in clinical trials. We thereby confirmed that the fraction of non-recommended tests with actionable findings that resulted in patient treatment was not significantly affected by order set roll-out (P = .55). It increased by 0.06% (from 0.07% to 0.13%) when considering all tests, and by 1.3% (from 0.5% to 1.8%) within the subset of non-recommended tests.
A Multidisciplinary Approach to Develop Best Order Practices for Precision Oncology
Based on our experience with the GI order set, we extended the approach and developed molecular order sets for all major cancer indications. The multidisciplinary design followed the same approach as outlined for GI and was conducted in two sprints: order sets for lung, breast, and GU cancers (released in March 2020), and the remaining indications (launched in December 2020). We noticed differences in the amount of time required for each order set (range 3–40 h). For example, the order set for neuro-oncology (10 meetings, ~40 h) had to account for the integrated diagnostic paradigm of morphology and molecular findings. Overall, the development of the 12 order sets took approximately 9 months, requiring 0.2 full-time-equivalent molecular faculty support and 42 roundtable discussions. The order sets are provided online, and the latest version may be retrieved by sending a blank email to: [email protected].
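For illustration, the significance of the shift in order practice reported above can be reproduced as a standard two-proportion (chi-square) comparison. The sketch below uses SciPy rather than Prism; the cell counts are derived from the totals reported in this section (1323 and 1580 requests; 207 and 113 non-recommended orders, with recommended counts inferred by subtraction) and are intended only as an example.

```python
from scipy.stats import chi2_contingency

# 2x2 table: rows = period (before, after roll-out);
# columns = (recommended, non-recommended) orders.
# Recommended counts inferred by subtraction: 1323 - 207 and 1580 - 113.
table = [
    [1116, 207],   # before: ~84% recommended
    [1467, 113],   # after:  ~93% recommended
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # p << .001
```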
Here we report the clinical adoption of a low-cost approach to harmonize test orders in precision oncology. Precision oncology relies on more than genotyping and next-generation sequencing (NGS) panels. We devised an approach that channels interdisciplinary domain expertise into a streamlined clinical decision support tool for molecular test orders. One advantage of our approach of capturing existing expertise is cost-effectiveness: we were not constrained by costly demands such as hiring new personnel or investing in expensive software solutions. We present a comparison of two 12-month periods, before and after order set implementation, across all gastrointestinal malignancies, and we provide the order sets for several disease centers. Importantly, we demonstrate that the approach does not reduce the number of relevant findings in a subset of 9 gastrointestinal cancers. By sharing our multi-year order optimization initiative, we aim to encourage other healthcare networks to streamline precision oncology orders. Realizing precision oncology relies on results from more than one test, and often more than one panel. Even the most comprehensive genotyping approaches cannot deliver the PD-L1 protein expression status, the MGMT promoter methylation status in brain tumors, or the ER/PR/HER2/Ki-67 labeling index in breast cancer, or cover interdisciplinary Lynch syndrome protocols using mismatch repair protein staining. These examples underscore that precision oncology requires a disease-specific testing approach to obtain the status of all relevant biomarkers, depending on the setting (FDA intended use). Molecularly informed therapies have been associated with improved survival in patients with advanced cancer, and improving the availability and timing of pertinent information to assess relevant treatment options has been a central element of numerous initiatives. Furthermore, precision oncology relies on the ongoing incorporation of newly approved treatments, NCCN guideline updates, and payor policies, and on overcoming administrative hurdles. Despite systematic attempts, providers face several challenges for individual patients. For example, access to relevant (molecular-genetic) domain knowledge can be limited, access to so-called best practices remains highly variable, documentation is challenging, and numerous computational solutions have been proposed. Several strategies are integral to practicing precision oncology. These include tumor boards, interdisciplinary consultations, and trial matching, which can, depending on the trial portfolio, pose substantial challenges. These various strategies rely in part on molecular test results, and here we focused on harmonizing the ordering process for several reasons. First, all patients with advanced cancer and medically relevant indications should be offered testing, regardless of tumor type. The concerted efforts of the NCI-MATCH master trial showed that comprehensive testing across tumor histologies can identify a significant number of actionable alterations and led to successful patient accrual to 30 treatment subprotocols, 11 of which reached their accrual goals. Second, testing algorithms are complex and depend on tumor stage and type, and recent national data stress the need to guide and educate physicians and patients in the field of genomic profiling. Third, there is an unrecognized complexity in aligning the various payor policies and guideline recommendations with local test availability.
The presented order sets can establish local best practices and harmonize patient access (i.e., access to the right tests for the right patients). However, the realization of the order sets differed in complexity. For example, in neuro-oncology many molecular tests are performed to reach the final diagnosis (i.e., diagnostic biomarkers). Other sets contain a mixture of predictive and diagnostic biomarker considerations (e.g., to distinguish benign from malignant soft tissue tumors, or to identify the presence of an EGFR-activating mutation in a poorly differentiated tumor with an equivocal immunophenotype). The creation and implementation of the order sets established a platform that aligns different biomarker functions. We caution that capturing the vantage points and domain expertise of diverse stakeholders ultimately relies on collaborative synergy. The limitations of our study are primarily related to our approach. Order set dissemination was based on the list of providers, and we allowed a combination of EMR- and non-EMR-based order entries, but we could not be certain we reached all providers. This was a deliberate choice to perform short iterative improvements and focus on sustained interconnectedness of colleagues with domain knowledge from oncology and pathology. Despite the rather simple dissemination, the content was co-created, and the higher rate of recommended orders at network sites can be taken as evidence that the oncologists used the order sets. While we cannot prove causality, the providers were not aware of being observed when ordering. We did not enforce use: providers remained able to order à la carte tests, and we did not cancel non-recommended orders (i.e., we maintained provider preference). Thus, our findings likely reflect unbiased order practice when providers are offered the local best practice vs. relying on personal experience. A second limitation is that we focused on one organ system (GI) for this analysis. While our GI order set covered 9 different malignancies and 10 different high-complexity assays, it does not cover the entire spectrum of GI malignancies. Third, our analysis was limited to the orders that were received by our laboratory. Therefore, we were unable to account for missed tests (i.e., tests that were not ordered but that should have been run based on the GI order set recommendations). Analysis of non-recommended tests with actionable findings revealed that most requests (15/17) consisted of FISH assays for samples that also underwent (recommended) NGS testing. One reason for these orders is that NGS has lower sensitivity to detect copy number gains, especially when the sample has low tumor purity; most commercial labs, for example, apply a 20% tumor content cutoff for molecular assays. Fourth, circulating tumor DNA (ctDNA) analysis was not systematically performed for GI malignancies during the time frame selected for analysis (before 2019). However, our molecular test order sets include ctDNA recommendations for lung, breast, and thyroid cancers to assess disease progression on therapy and/or when obtaining a tissue biopsy is clinically contraindicated. Fifth, we did not perform a detailed financial analysis accounting for the various payors, prior authorization changes, or the evolution of payor policies and professional guidelines over time. We did, however, adjust for volume increases in the laboratory, and we consider the shift toward recommended tests, with the elimination of non-recommended tests, an optimization (net neutrality).
However, the appropriate metrics for measuring success are unclear. There is a substantial attrition rate from the initial order to the result, to the identification of the appropriate indication (for FDA-approved agents) or of eligibility for a clinical trial, and finally to the actual delivery of the agent to the patient. Treatment decisions depend on the overall clinical picture and on the correct interpretation of the molecular results. Patient management strategies are discussed at subspecialty-specific weekly tumor boards, and unusual cases of general interest are discussed at a weekly consensus meeting and at a monthly molecular tumor board conference. These longstanding conferences include providers across the network and have remained essentially unchanged over the time frame covered by our study. Overcoming practical and administrative hurdles while avoiding disparities and achieving consistent access for all patients requires alignment of seemingly discrepant workflow elements. We consider harmonizing test order practice an essential element of realizing precision oncology. The next steps in our precision diagnostics program include three specific ongoing projects. First, we aim to incorporate the ability for e-consultations to serve as a point of contact for network sites interested in exploring reasons for, or against, certain tests. The service line, which was rolled out in February 2022, entails pathology "curbside" consultations in our electronic medical record; for more comprehensive e-consultations we are using the recently revised clinical-pathological consultation billing framework. Second, we will expand the distribution of our molecular order sets. We consider harmonization of test order practice a key element of equitable care, with uniform access for all, including minorities and those living in rural areas. Third, we plan to make our real-time clinical trial landscape available to all providers. In summary, the implementation of precision oncology workflows relies on numerous elements. Here, we reported a cost-efficient strategy to align test order practices across multiple stakeholders. Standardized test order practice and access to continuously updated domain knowledge are an essential part of a precision diagnostic laboratory's value proposition in an integrated healthcare network.
Relationship between CYP2D6 genotype, activity score, and risperidone plasma levels
The aims of this investigation were to demonstrate whether the revised value for CYP2D6*10 indeed improves the relationship between AS and RIS plasma drug levels and to assess whether phenotype groupings, as recommended by CPIC, are appropriate for RIS.
Patients
One hundred and ninety-nine participants with ASD, aged 3–18 years, diagnosed according to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-V) criteria at the Yuwaprasart Waithayopathum Child Psychiatric Hospital, Samut Prakan, Thailand, were recruited during 2017–2018. All patients were treated with a RIS-based regimen for at least four weeks before blood sample collection. Socio-demographic data, including gender, age at assessment, daily RIS dosage, duration of RIS treatment, and concomitant medication, were collected by questionnaire. Patients were excluded if they were receiving concomitant treatments that could potentially affect RIS metabolism. This study was approved by the Ethics Review Committee on Human Research of the Faculty of Medicine Ramathibodi Hospital, Mahidol University, Thailand (MURA2017/556) and conducted in accordance with the Declaration of Helsinki. The study protocol was clearly explained to all participants and/or their legal guardians, and informed consent was given before the study.
Genotyping methods
Genomic DNA was extracted from EDTA blood with the MagNa Pure automated extraction system according to the manufacturer's instructions. CYP2D6 was genotyped on a bead array platform based on allele-specific primer extension (ASPE) and hybridization to oligonucleotide-bound microspheres using the Luminex xTAG CYP2D6 Kit v3 (Luminex Corporation, Austin, TX, USA) according to the manufacturer's instructions. The assay interrogates 21 variants, including 19 CYP2D6 single nucleotide polymorphisms (SNPs) (−1584C>G, 31G>A, 100C>T, 124G>A, 137_138insT, 882G>C, 1022C>T, 1660G>A, 1662G>C, 1708delT, 1759G>T, 1847G>A, 2550delA, 2616delAAG, 2851C>T, 2936A>C, 2989G>A, 3184G>A, and 4181G>C) as well as gene deletion and duplication. The allelic variants called by this array are CYP2D6*1 (assigned in the absence of variants; default assignment), *2, and *35 (normal function), *9, *10, *17, *29, and *41 (decreased function), and *3, *4, *5, *6, *7, *8, *11, and *15 (no function), as well as the presence of duplications. Patients who were carriers of a CYP2D6 duplication were excluded, because this array does not further characterize gene duplications (i.e., copy number or which allele is affected by the duplication). For instance, a duplication observed in an individual genotyped as CYP2D6*1/*10 could result in, e.g., a CYP2D6*1xN/*10, CYP2D6*1/*10xN, or *1/*36 + *10 genotype call. To calculate the AS, values were assigned to the alleles identified in the study cohort as follows: no function alleles (*4, *5) = 0; the decreased function allele *10 = 0.25; other decreased function alleles (*14, *41) = 0.5; and normal function alleles (*1, *2, *35) = 1. The AS of each diplotype is the sum of the values assigned to each allele. Individuals with an AS of 0 were categorized as PMs, those with an AS of 0.25, 0.5, or 0.75 were categorized as IMs, and those with an AS of 1.25, 1.5, 1.75, or 2 were grouped as NMs. To compare translation methods, those with an AS of 1 were categorized either as IM (new CPIC method) or as NM (previous CPIC method).
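The AS calculation and the two translation methods compared in this study can be expressed compactly in code. The following Python sketch covers only the alleles observed in this cohort and omits the UM category, since duplication carriers were excluded; it is an illustrative re-implementation, not the analysis code used here.

```python
# Illustrative re-implementation of the AS calculation; allele values follow
# the assignments given above. UM is omitted because duplication carriers
# were excluded from this cohort.
ALLELE_VALUE = {
    "*1": 1.0, "*2": 1.0, "*35": 1.0,   # normal function
    "*14": 0.5, "*41": 0.5,             # decreased function
    "*10": 0.25,                        # decreased function, downgraded value
    "*4": 0.0, "*5": 0.0,               # no function
}

def activity_score(allele1: str, allele2: str) -> float:
    """AS of a diplotype = sum of the values assigned to each allele."""
    return ALLELE_VALUE[allele1] + ALLELE_VALUE[allele2]

def phenotype(score: float, method: str = "new") -> str:
    """Translate AS to phenotype; the two methods differ in handling AS = 1."""
    if score == 0:
        return "PM"
    if method == "new":                  # new CPIC: AS 0.25-1 -> IM
        return "IM" if score < 1.25 else "NM"
    return "IM" if score < 1 else "NM"   # previous CPIC: AS 1-2 -> NM

print(activity_score("*1", "*10"))                        # 1.25 -> NM either way
print(phenotype(1.0, "new"), phenotype(1.0, "previous"))  # IM NM
print(phenotype(activity_score("*10", "*10")))            # 0.5 -> IM
```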
Analytical drug assay/plasma concentrations
Trough plasma concentrations of RIS and its 9-OH-RIS metabolite were quantified between 8:00 and 10:00 AM, approximately 12 h after the bedtime dose, using a validated, previously published high-performance liquid chromatography procedure. Briefly, we used an Agilent 1260 HPLC system (Agilent Technologies, CA, USA) connected to an AB Sciex API 3200 instrument (Applied Biosystems, Foster City, CA, USA). Chromatographic separation was achieved on a C18 column (4.6 mm × 50 mm; 1.8 µm particle size). Integration of peak areas and determination of concentrations were performed with the Analyst 1.5.2 software (Applied Biosystems, CA, USA). Quadratic regression with 1/x-weighted concentrations was used. The mean inter- and intra-assay accuracy for both RIS and 9-OH-RIS was within ±15.0% relative error of nominal, and precision was < 15.0% relative standard deviation.
Statistical analysis
Descriptive statistics were used to describe the clinical characteristics of the subjects. Data were expressed as mean (standard deviation, SD) or median (interquartile range, IQR) for normally or non-normally distributed data, respectively. The nonparametric Kruskal–Wallis test (for comparisons of more than two groups) and Mann–Whitney U test (for comparisons between two groups) were used to assess the association between plasma drug levels and the studied genotypes or predicted phenotypes at each time point. Statistical analyses were carried out using SPSS v24 (SPSS Inc., Chicago, IL, USA) for Windows. Statistical significance is reported as P < 0.05 for a two-tailed distribution.
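Although all analyses were performed in SPSS, the same nonparametric tests are available in open-source tools. The following sketch illustrates the Kruskal–Wallis and Mann–Whitney U comparisons with SciPy using invented toy values, not study data.

```python
from scipy.stats import kruskal, mannwhitneyu

# Toy dose-corrected RIS concentrations (ng/ml/mg) by AS group; invented values.
as_low  = [1.2, 1.9, 1.4, 2.2]       # e.g., AS 0.25-0.75
as_one  = [0.4, 0.3, 0.5]            # AS 1
as_high = [0.2, 0.3, 0.25, 0.4]      # AS 1.25-2

h_stat, p_kw = kruskal(as_low, as_one, as_high)        # more than two groups
u_stat, p_mw = mannwhitneyu(as_low, as_one + as_high,  # two groups
                            alternative="two-sided")
print(f"Kruskal-Wallis p = {p_kw:.3f}; Mann-Whitney p = {p_mw:.3f}")
```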
Demographic and clinical characteristics
Our sample consisted of 199 children and adolescents diagnosed with autism spectrum disorders, with a mean age of 9.25 (SD 3.93) years; most were male (174; 87.44%). Demographic data are presented in Table. Participants were treated with a RIS-based regimen, and 118 patients (59.3%) received RIS monotherapy. Medications concomitantly prescribed to patients were methylphenidate, sodium valproic acid, benzhexol, topiramate, cetirizine, clonazepam, hypodine, phenytoin, and phenobarbital. There were no significant differences in RIS or 9-OH-RIS levels between children and adolescents, between males and females, or between those receiving monotherapy and polytherapy.
Distribution of the CYP2D6 alleles and genotypes
The CYP2D6*10 decreased function allele was the most common allele identified among the 199 subjects, at 51.8%. The frequencies of the normal function alleles CYP2D6*1 and CYP2D6*2 were 25.1% and 6.3%, respectively. Another decreased function allele, CYP2D6*41, was observed at 6.8%. CYP2D6*4 and CYP2D6*5, both nonfunctional alleles, were found at frequencies of 1.3% and 8.3%, respectively. We also observed two subjects with the rare CYP2D6*14 allele (0.50%) in this study cohort. CYP2D6 allele frequencies are presented in Table. Of the 398 alleles, 125 were normal function (aggregate frequency of 31.4%) and were assigned a value of 1 to calculate the AS; the 206 CYP2D6*10 alleles (51.8%) received a value of 0.25, while the 29 other decreased function alleles (aggregate frequency of 7.3%) received a value of 0.5 and the 38 no function alleles (aggregate frequency of 9.6%) received a value of 0. Genotype frequencies are summarized in Supplementary Table. Of the 20 CYP2D6 genotypes identified, CYP2D6*1/*10 was the most frequent (29.6%), followed by CYP2D6*10/*10, CYP2D6*5/*10, and CYP2D6*10/*41 (26.1%, 7.5%, and 7.5%, respectively).
Plasma levels and C/D of RIS, 9-OH-RIS, active moiety, and RIS/9-OH-RIS ratio in the different CYP2D6 AS groups
The relationship between CYP2D6 AS, RIS plasma concentration, and the 9-OH-RIS metabolite was examined in 199 patients (Table). Patients were divided into eight groups (AS of 0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, and 2). The most common AS was 1.25 (35.18%), comprising the CYP2D6*1/*10 and CYP2D6*2/*10 genotypes. There were significant differences in RIS plasma concentration, the RIS/9-OH-RIS metabolic ratio, and the C/D of RIS between patients with an AS of 0.25, 0.5, or 0.75 and those with an AS of 1, 1.25, 1.5, or 2. There was also a significant difference when patients were divided into two groups, one with AS < 1 and the other with AS ≥ 1: plasma levels of RIS, the RIS/9-OH-RIS ratio, and the plasma C/D of RIS were significantly higher in patients with AS < 1 than in patients with AS ≥ 1 (P < 0.001 for all three drug parameters) (Fig. A–C). When genotypes with an AS of 1 were categorized as IM, the differences in RIS, the RIS/9-OH-RIS ratio, and RIS C/D between AS of 1 and AS > 1 were not significant (P = 0.412, 0.519, and 0.314, respectively), whereas AS of 1 differed significantly from AS < 1 (P = 0.005, P < 0.001, and P = 0.015, respectively). Based on these findings, individuals with an AS of 1 presented as NMs rather than IMs, while all others fit within their respective phenotype categories.
Association between plasma RIS parameters and predicted phenotypes
Based on the above findings, patients with an AS of 0, an AS of 0.25–0.75, and an AS of 1–2 presented as, and were thus classified as, PM, IM, and NM, respectively.
Fifty-six percent of patients (n = 111) were NMs, followed by IMs (n = 87, 43.7%); only one patient (0.5%) had a predicted PM phenotype. There were statistically significant differences in plasma RIS concentration (P < 0.001) and the RIS/9-OH-RIS ratio (P < 0.001) when subjects were categorized as described above (Table and Fig.). The plasma concentration of RIS among IMs (AS = 0.25–0.75; 1.44 ng/ml) was significantly higher than that among NMs (AS = 1–2; 0.25 ng/ml; P < 0.001) and lower than that found in the PM individual (2.67 ng/ml). The RIS/9-OH-RIS ratio in IM subjects was significantly higher than the ratio observed in NMs (0.20 vs. 0.04, P < 0.001). IM patients also had a significantly higher C/D of RIS than NMs (1.63 vs. 0.29 ng/ml/mg, P < 0.001).
To the best of our knowledge, this is the first study applying the revised CPIC recommendations for the translation of CYP2D6 genotype to phenotype in an Asian population. This new method is anticipated to have a considerable impact on Asians compared to other populations due to the high frequency of the CYP2D6*10 allele. This allele conveys a considerable decrease in function and was thus downgraded, i.e., it now receives a lower value for AS calculation, to improve the accuracy of phenotype prediction. The CPIC recommendations are drug-agnostic, i.e., the predicted phenotype does not take substrate specificity into account. Thus, in addition to evaluating whether the revised value for CYP2D6*10 improves the relationship between AS and RIS, the RIS/9-OH-RIS ratio, and the C/D of RIS, we also assessed whether phenotype groupings, as recommended by CPIC, are appropriate for RIS. Owing to the revised AS definition, a notable number of subjects would be reclassified as IMs (Fig.). Specifically, 17 subjects with an AS of 1 who were grouped as NM under the old method would be grouped as IM under the new method. Their observed phenotype, however, identified them as NMs, suggesting that the recommended classification system does not improve phenotype prediction for RIS. In contrast, using the lower value of 0.25 for CYP2D6*10 in the AS calculation did improve the relationship between AS and RIS, the RIS/9-OH-RIS ratio, and the C/D of RIS. Similar findings were reported by Brown et al., who showed that systemic exposure to atomoxetine (AUC0-∞) in subjects with an AS of 1 was not significantly different from that observed for subjects with an AS of 1.5 or 2. In addition, Frederiksen et al. demonstrated allele-specific metabolism of vortioxetine, suggesting substantial differences among decreased function alleles. Taken together, these findings raise awareness of the limitations and pitfalls of drug-agnostic genotype-to-phenotype translation methods. This is further substantiated by the plasma concentrations of RIS and the RIS/9-OH-RIS ratios being significantly higher for an AS of 0.25–0.75 than for an AS of 1–2, arguing that the former should be classified as IMs and the latter as NMs. Therefore, to predict CYP2D6 phenotype for RIS treatment, genotype should be translated into phenotype as shown in Table. Additionally, the CYP2D6 genotype (or AS) had a substantial impact on the trough dose-corrected plasma concentration of RIS. In accordance with results we previously reported for a different cohort, there were statistically significant differences in the plasma concentration of RIS (P < 0.001) and the RIS/9-OH-RIS ratio (P < 0.001) among phenotype groups in Thai children with autism. Furthermore, PM patients had significantly higher RIS C/D than those genotyped as CYP2D6*1/*1. The same pattern was observed in another study, i.e., the C/D ratio for RIS was significantly different in CYP2D6 PMs. The presence of the CYP2D6*10 allele was also associated with a significantly higher C/D of RIS at week 12 (P = 0.003) in North Indian patients with schizophrenia. Moreover, plasma RIS/9-OH-RIS ratios were significantly higher in patients with an AS of 0.5 compared to those with an AS of 2 in an independent cohort of Thai subjects. Taken together, the RIS/9-OH-RIS metabolic ratio is a biomarker for CYP2D6 activity, which may be useful to guide the treatment of patients in need of psychotropic drugs.
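The RIS-specific grouping supported by these data reduces to a simple rule. The following standalone sketch restates the translation proposed in the Table; the function name is ours, and the thresholds follow the AS boundaries reported above.

```python
def ris_phenotype(activity_score: float) -> str:
    """RIS-specific phenotype grouping supported by this study (see Table)."""
    if activity_score == 0:
        return "PM"                      # AS 0
    if activity_score <= 0.75:
        return "IM"                      # AS 0.25-0.75: elevated RIS exposure
    return "NM"                          # AS 1-2: NM-like RIS exposure

for s in (0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 2.0):
    print(s, ris_phenotype(s))
```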
There were no significant differences in 9-OH-RIS and total active moiety concentrations among the CYP2D6 predicted phenotype groups, as found in an earlier study. Similarly, the total active moiety (the sum of the plasma concentrations of RIS and 9-OH-RIS), corrected for dose, did not significantly differ between individuals of different genotypes. These findings are consistent with a previous study in another Thai cohort of ASD patients that showed no significant differences in 9-OH-RIS and active moiety concentrations, and with a study using positron emission tomography scans of healthy volunteers after a single oral dose of RIS, which showed that plasma concentrations of the sum of RIS and 9-OH-RIS partly overlapped between NMs and PMs. Therefore, the plasma concentrations of 9-OH-RIS and the total active moiety are largely independent of CYP2D6-related metabolism. It has been suggested that the efflux transporter ABCB1, as well as CYP3A5, can contribute to the steady-state plasma concentrations of RIS, 9-OH-RIS, and the active moiety. As mentioned above, the CPIC-recommended drug-agnostic method to predict phenotype may not accurately predict phenotype across all drugs and all allelic variants. Regardless of its imperfections and shortcomings, using a standardized system is preferable because it makes comparisons of results among studies easier. However, it also demonstrates the need to develop more sophisticated algorithms that take substrate specificity, among other patient-specific information, into account. We acknowledge the following limitations of the Luminex platform. This test does not quantitatively determine copy number, nor does it determine which allele is duplicated or identify any other structural variants. Furthermore, only the most common alleles are tested. We speculate that some subjects may have rare or novel alleles, which may explain some of the outliers shown in Fig. In conclusion, the new CPIC-recommended genotype-to-phenotype translation method, developed to promote standardized phenotype classification, has its limitations for RIS. Using the AS rather than the predicted phenotype may be more accurate for this drug, especially considering the broad range of CYP2D6 activity and substrate specificity. The findings of our study provide valuable information to further the implementation of genotype-guided risperidone treatment.
Culture-Based Standard Methods for the Isolation of Campylobacter spp. from Food and Water
The Campylobacter spp. detection procedures of PHE, ISO, and the US-FDA Bacteriological Analytical Manual (FDA-BAM) share some similarities. However, FDA-BAM recommends the same Campylobacter spp. isolation procedure for all types of samples (shellfish, milk, cheese, and water). ISO recommends three different methods according to the Campylobacter spp. contamination levels and background bacteria in food and water samples. PHE also employs the same procedure for different surface water and environmental samples.
FDA protocols
The FDA has established five processing procedures before Campylobacter spp. detection from food and water samples. Sample preparation differs, whereas the detection procedure remains the same for all sample types and food sample/homogenate quantities. Campylobacter spp. isolation from most foods (vegetables, poultry, water, shellfish, milk, and cheese) requires a Bolton broth-based pre-enrichment under microaerobic conditions (N2: 85%, CO2: 10%, and O2: 5%). Pre-enrichment temperature and incubation time can vary among sample types, and pre-enrichment is followed by enrichment (20–44 hours, 42°C) under microaerobic conditions. The enrichment culture is streaked on an FDA-recommended selective plating medium (Abeyta-Hunt-Bark agar (AHB), modified charcoal cefoperazone deoxycholate agar (mCCDA), or Abeyta-Hunt-Bark agar without antibiotics). Plates are then incubated under microaerobic conditions (24–48 hours, 42°C). The FDA protocol for the identification and confirmation of presumptive Campylobacter spp. colonies relies largely on the biochemical features of Campylobacter spp. Initially, suspected colonies are examined for oxidase and catalase activity, followed by physiological and biochemical tests such as nitrate reduction, hippurate hydrolysis, reaction on triple sugar iron (TSI) agar, nalidixic acid resistance, growth at different temperatures (42°C, 35–37°C, and 25°C), and growth in glycerin. The biochemical and physiological features used to confirm the identification of different Campylobacter spp. are listed in the accompanying table.
ISO Protocols
ISO proposed three procedures for Campylobacter spp. isolation according to contamination levels and background bacteria. Samples with lower numbers of Campylobacter spp. and background bacteria are subjected to Bolton broth-based pre-enrichment (4–6 hours, 37°C) followed by enrichment under a microaerobic atmosphere (44 hours, 41.5°C). Selective plating is then carried out using mCCDA and another medium of choice, followed by incubation (44 hours, 41.5°C) (Procedure A). Preston broth is used for the selective enrichment (24 hours, 41.5°C) of Campylobacter spp. in samples with lower Campylobacter numbers and high background bacteria, followed by mCCDA-based selective plating as in Procedure A (Procedure B). Samples with high Campylobacter spp. levels are subjected to direct plating on selective agar (mCCDA) without enrichment steps (Procedure C). ISO recommends a colony count method for Campylobacter spp. enumeration in food and water samples. It is carried out by spreading water and milk samples (1 ml) or food homogenate (1 ml) on a well-dried mCCDA plate surface; this approach can also be applied to serial dilutions of the samples. The plates are then incubated (40–44 hours, 41.5°C) under microaerobic conditions without pre-enrichment or enrichment steps. ISO recommends the microscopic confirmation of suspected Campylobacter spp. colonies through motility and morphological appearance.
Moreover, aerobic growth (25°C) and oxidase activity should also be analyzed, and other biochemical tests can be performed to differentiate Campylobacter spp. colonies. ISO protocols also suggest PCR-based molecular identification and confirmation of presumptive Campylobacter spp. colonies.
PHE protocols
PHE protocols for Campylobacter spp. isolation from food samples involve enrichment of the homogenate (25 g) in Bolton broth (10⁻¹ dilution) with incubations for 5 hours at 37°C and 44 hours at 41.5°C. Enrichment cultures are then streaked on mCCDA plates and microaerobically incubated (44 hours, 41.5°C). The microaerobic growth (41.5°C) of presumptive Campylobacter spp. colonies is compared with aerobic growth (25°C) on blood agar plates to confirm their identity. The procedure involves the examination of five suspected colonies from each mCCDA plate. The PHE protocol also recommends other confirmatory steps, such as microscopic examination of cell motility and the oxidase test. Furthermore, PHE recommends optional confirmation through PCR assay and latex test kits (Campylobacter Latex Kit (LIOFILCHEM® S.r.l., Italy), Oxoid™ DrySpot™ Campylobacter Test Kit (Thermo Fisher Scientific, Inc., USA), and Campylobacter Confirm Latex kit (Bio-Rad Laboratories, Inc., USA)).
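The ISO colony-count enumeration described above amounts to a simple dilution calculation. The helper below is an illustrative sketch (not part of any standard's text) for converting a plate count back to CFU per milliliter or gram; the 1:10 homogenate factor follows the sample preparation described in this section.

```python
def cfu_per_ml(colonies: int, volume_plated_ml: float, dilution: float) -> float:
    """CFU per ml of the original suspension from a single plate count.

    dilution: dilution factor of the plated suspension, e.g. 0.01 for 10^-2.
    """
    return colonies / (volume_plated_ml * dilution)

# Example: 42 colonies from 1 ml of a 10^-2 dilution of a 1:10 food homogenate.
homogenate_cfu = cfu_per_ml(42, 1.0, 1e-2)   # 4,200 CFU/ml of homogenate
print(homogenate_cfu)
print(homogenate_cfu * 10)                   # ~42,000 CFU/g of food (1:10 factor)
```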
Campylobacter spp. isolation

Enrichment media for Campylobacter spp.

Foodborne Campylobacter spp. are conventionally recovered by culturing and isolation methodologies. Selective enrichment is the initial step in conventional Campylobacter spp. recovery methods, followed by selective plating for isolation and by confirmatory tests (immunological, molecular, and biochemical). Enrichment broths can revive stressed and inhibitor-exposed bacteria in the tested matrix and facilitate recovery even at low concentrations. Enrichment broths differ in nutrient composition, incubation time, oxygen-quenching capacity, and atmosphere and temperature requirements, and antimicrobial substances are added to restrain the growth of competing microorganisms. Numerous enrichment broth formulations have been developed for Campylobacter spp. isolation, including Exeter broth, Bolton broth (BB), modified CCD broth, Preston broth (PB), Doyle and Roman broth, and Rosef and Kapperud Campylobacter enrichment broth. The Bolton and Preston formulations are crucial for primary selective enrichment and are recommended for their satisfactory output, particularly with low bacterial counts and stressed bacteria. The enrichment stage also enhances the growth of background microflora in the target samples. Therefore, selective substances should be used to optimize Campylobacter spp.
growth conditions for better recovery. There is no specific standard method for recovering Campylobacter species, particularly non-thermotolerant species. An antimicrobial-supplemented basal medium (nutrient broth or Brucella broth) is the main ingredient of enrichment broths. Enrichment broths were initially supplemented with lysed sheep or horse blood to reduce damage from oxidative toxins. However, the comparatively high cost of blood, and the fact that it is not essential for Campylobacter spp. isolation from poultry meat, reduced their application. Blood-free formulas are more convenient and can also be integrated with molecular techniques for rapid pathogen detection and identification. Campylobacter spp. isolation does not require a rich basal medium. The USDA Food Safety and Inspection Service also uses blood-free Bolton broth, considered the best enrichment alternative. Buffered peptone water is quite similar to Bolton broth's basal component and is equally effective for Campylobacter spp. isolation from broiler meat. Bolton broth is recommended for the enrichment of all sample types; in particular, the US FDA recommends it for the recovery of Campylobacter spp. from various types of samples (environmental, food, and clinical). ISO also recommends Bolton broth for the enrichment of samples with low Campylobacter spp. and background bacterial counts. Bolton broth contains different nutrients, including yeast extract, peptone, sodium pyruvate, alpha-ketoglutaric acid, hemin, and sodium metabisulphite. Hemin helps overcome the trimethoprim antagonism of yeast extracts. The addition of sodium metabisulphite and sodium pyruvate allows aerobic incubation, whereas sodium carbonate provides carbon dioxide for bacterial growth. The medium contains antibiotics (cycloheximide, cefoperazone, trimethoprim, and vancomycin) and lysed horse blood; the antibiotics restrict the growth of non-specific contaminating microorganisms. Specific substrates in Bolton broth limit trimethoprim antagonism, whereas hemin, the ferrous sulfate/sodium metabisulfite/sodium pyruvate (FBP) mixture, and blood enhance oxygen quenching. Vancomycin in Bolton broth suppresses Gram-positive cocci but has lower efficacy than the rifampicin used in Exeter and Preston broths. Nevertheless, Bolton broth is preferred for Campylobacter spp. isolation from poultry samples. Bolton broth can fail to detect certain Campylobacter species (C. coli and C. jejuni) in vegetables and chicken. Antibiotics in Bolton broth enhance its selectivity. However, cefoperazone alone might provide insufficient selectivity in mCCDA (modified charcoal cefoperazone deoxycholate) agar and Bolton broth, which could be due to the absence of rifampicin and polymyxin. Bolton broth is nonetheless helpful for samples with low numbers of sublethally damaged or stressed Campylobacter spp. and low numbers of non-target organisms. Several modifications of incubation temperatures and selective agents have been suggested for accurate and improved Campylobacter spp. detection. Preston broth is another commonly used enrichment broth for the isolation of Campylobacter spp. from various complex samples, including environmental specimens, turbid surface water, and food. Preston medium is a nutrient broth containing lysed horse blood and antibiotics (cycloheximide, rifampicin, polymyxin B, and trimethoprim); in contrast to Bolton broth, it does not contain yeast extract (a trimethoprim antagonist). Rifampicin is highly effective against Gram-positive bacteria.
The culture medium is incubated at 42°C under a microaerobic atmosphere. The presence of cycloheximide/amphotericin B, polymyxin B, trimethoprim, and rifampicin significantly enhances the selectivity of Preston broth. Polymyxin B inhibits the growth of extended-spectrum beta-lactamase (ESBL) bacteria, as it possesses high activity against Gram-negative bacteria. Therefore, samples with high background flora (ESBL bacteria) are preferably grown in Preston broth, which has demonstrated high selectivity against non-target flora during Campylobacter spp. enrichment. ISO also recommends Preston broth for Campylobacter spp. isolation from samples (poultry and milk) with high background bacteria. A comparison of different enrichment methods noted better efficacy of Preston broth than of Bolton broth, as the latter allowed the growth of some Escherichia coli strains that could hinder Campylobacter spp. growth and produce false-negative outcomes. Conversely, some studies have reported inhibited growth of certain Campylobacter strains (C. coli) in Preston broth, also leading to false-negative results. Exeter broth is routinely used in various laboratories to analyze water and food samples. It is also a primary enrichment medium for freshwater microbiological investigations. Exeter selective broth's formulation is based on a nutrient broth supplemented with lysed horse blood (5%). The formula was later modified, and the oxygen-quenching ferrous sulfate/sodium metabisulfite/sodium pyruvate (FBP) mixture of Bolton et al. (1984a, b) was added, allowing aerobic incubation of Exeter broth. Exeter broth also contains different antibiotics, including cefoperazone (against Pseudomonas spp. and Enterobacterales), rifampicin, polymyxin B, amphotericin (against yeasts and molds), and trimethoprim. Modified charcoal cefoperazone deoxycholate broth (mCCD) is another blood-free selective enrichment broth, modified from the original charcoal cefazolin deoxycholate (CCD) medium. The mCCD broth mainly comprises nutrient broth, cefoperazone, casein hydrolysates, bacteriological charcoal, and FBP supplements (sodium pyruvate, sodium deoxycholate, and ferrous sulfate). The mCCD broth contains several Campylobacter spp. growth-promoting components and supports the direct isolation of Campylobacter spp. from animal and human feces. The combination of charcoal, deoxycholate, and cefoperazone inhibits the growth of commensal flora and common contaminants in food and clinical samples. Rosef and Kapperud Campylobacter enrichment broth contains sodium chloride, peptone, and antimicrobials (polymyxin B, vancomycin, and trimethoprim). Cysteine hydrochloride and sodium succinate-supplemented Brucella broth served as the basal medium in Doyle and Roman enrichment broth (DREB); antibiotics (polymyxin B, vancomycin, cycloheximide, and trimethoprim) and lysed horse blood were also added for better enrichment efficiency. Its developers used Brucella broth as a basal medium and altered its selectivity through significantly increased concentrations of cycloheximide and polymyxin, facilitating the selective recovery of low Campylobacter spp. numbers in food samples. Cysteine hydrochloride and succinate were also added, whereas lysed horse blood (7%) acted as an oxygen-quenching system. The medium was able to detect contamination in raw milk and hamburger (0.1 to 4.0 cells/gram) but remained ineffective for poultry samples, possibly owing to the diverse types and amounts of flora in these samples.
Therefore, the DREB medium was further modified to rapidly enrich C. jejuni from raw chicken carcass samples. Doyle and Roman enrichment broth was found to be the most suitable for detecting low C. jejuni levels in chicken meat samples after 12 months of storage at -18°C. A comparison of Park and Stankiewicz enrichment broth, Doyle and Roman enrichment broth, and a newly developed enrichment broth for C. jejuni isolation from raw chicken revealed the highest selectivity potential for Doyle and Roman enrichment broth.

Plating media for Campylobacter spp.

All the plating media used for Campylobacter spp. isolation from food samples are either direct compositions or modified forms of clinical media originally developed for Campylobacter spp. isolation from fecal and clinical samples. Different types of plating media with varying selectivity are available for Campylobacter spp. isolation. These media fall into two groups: blood-containing solid media, known as Campylobacter blood agar plates [Skirrow agar, Campy Brucella agar (Campy-BAP), Butzler agar, and Preston agar], and charcoal-based solid plating media [Karmali agar and mCCD agar]. Despite poor productivity and sensitivity in food samples, Karmali agar and mCCDA are the best media for Campylobacter spp. isolation, as colonies are easily recognizable on both. Charcoal compounds and blood can reduce toxic oxygen derivatives to generate a microaerobic environment for Campylobacter growth. Agar plates have also been developed without charcoal or blood, but they demonstrate considerably lower efficacy than charcoal- or blood-supplemented formulations. The resistance of thermophilic Campylobacter spp. to the various antibiotic combinations in a medium determines its efficacy. Antibiotics such as polymyxin, vancomycin, rifampicin, trimethoprim, cefoperazone, nystatin, cephalothin, cycloheximide, and colistin inhibit the growth of background microbiota in samples and allow the isolation of slow-growing Campylobacter spp. The capacity to inhibit contaminating flora differentiates the various media. All the selective agents facilitate the growth of C. coli and C. jejuni; to date, no medium can inhibit C. coli while allowing the growth of C. jejuni, or vice versa. Other Campylobacter species (C. hyointestinalis, C. lari, C. fetus, C. upsaliensis, and C. helveticus) also grow on most media to some extent, particularly at the less selective temperature of 37°C. Skirrow's selective agar medium was the first widely recommended for C. coli and C. jejuni isolation from human feces. It replaced the complicated method of selective filtration through 0.65 μm pore-size membranes. Skirrow's Campylobacter selective agar contains peptone, lysed horse blood (7%), and antibiotics (trimethoprim, vancomycin, and polymyxin B). The mixture of vancomycin (inhibiting Gram-positives), trimethoprim (a broad-spectrum antibiotic), and polymyxin B (active against Gram-negatives) enhances its selectivity. The addition of lysed horse blood neutralizes trimethoprim antagonists in the basal medium, although polymyxin-resistant Gram-negative Proteus spp. may still grow. The incubation temperature of 42°C also contributes to the medium's selectivity: only thermophilic Campylobacter spp. can grow in Skirrow's medium, whereas the growth of non-thermophilic strains (C. fetus subsp. fetus) is restricted at 42°C. Skirrow's medium is sometimes used for Campylobacter spp. detection in food samples but remains ineffective for many other types of samples.
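Because incubation temperature and a few key biochemical reactions carry much of the discriminating power discussed here, a simplified differentiation scheme can be expressed in a few lines. The sketch below is illustrative and deliberately incomplete; real identification relies on the full test panels of the standard protocols.

```python
# Simplified, illustrative differentiation using two traits discussed in
# this review: growth at 42 C (thermophilic species only) and hippurate
# hydrolysis (C. jejuni positive; C. coli and C. lari negative). Actual
# schemes apply the complete biochemical panel of the standards.

def presumptive_id(grows_42c: bool, grows_25c: bool,
                   hippurate_positive: bool) -> str:
    if not grows_42c and grows_25c:
        return "possible C. fetus (non-thermophilic)"
    if grows_42c and hippurate_positive:
        return "presumptive C. jejuni"
    if grows_42c:
        return "presumptive C. coli or C. lari (further tests needed)"
    return "not consistent with thermotolerant Campylobacter"

print(presumptive_id(grows_42c=True, grows_25c=False, hippurate_positive=True))
```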
Campylobacter-selective agars contain different antibiotic combinations. The blood-containing Campy Brucella agar plate, also known as Campy-BAP, has been widely used. Campy-BAP is a Brucella base agar that contains five antimicrobial agents (cephalothin, vancomycin, amphotericin B, polymyxin B, and trimethoprim) and is supplemented with sheep blood (10%). Antibiotics such as polymyxin B, cephalothin, and colistin might inhibit the growth of C. coli, C. jejuni, and C. fetus subsp. fetus. One study compared various enrichment techniques and direct isolation media for enumerating five C. jejuni strains in stored/refrigerated chicken meat: Campy-BAP agar and a blood-free Campylobacter medium exhibited higher C. jejuni detection capability than Doyle and Roman enrichment broth and modified Butzler agar. Another investigation of five types of selective media, including Campy-BAP and charcoal cefazolin deoxycholate agar (CCDA), noted a better recovery rate with the CCDA medium (83%) than with the Campy-BAP medium (75%); the CCDA medium also effectively suppressed normal enteric flora contamination. Campylobacter spp. colonies, particularly C. coli, appeared atypical on the Campy-BAP medium: the strains mainly produced homogeneous, discrete, grey colonies, which in several cases were difficult to differentiate from coliform colonies, whereas Campylobacter spp. colonies exhibited transparent and moist growth on other media. The colony morphology on Campy-BAP medium thus complicated Campylobacter spp. identification. Preston Campylobacter selective agar was specifically developed for Campylobacter spp. isolation from diverse specimens (environmental, human, and animal). Preston medium was prepared by dissolving nutrient broth in New Zealand agar and adding saponin-lysed horse blood and antibiotics (trimethoprim, polymyxin, actidione, and rifampicin). Preston medium demonstrated a high Campylobacter spp. isolation rate from all tested samples and remained the most selective medium compared with other media. Butzler's Campylobacter agar is used to isolate Campylobacter species selectively from different specimens, including clinical samples. The first selective formulation contained sheep blood agar and five antimicrobials (cephalothin, novobiocin, colistin, actidione, and bacitracin), in which bacitracin and cephalothin inhibited Gram-positive bacteria, and colistin and novobiocin inhibited the Gram-negative enteric flora. The further addition of cycloheximide inhibited the growth of common clinical mycotic contaminants. This medium was developed as an alternative to filtration and culturing on an elective blood-thioglycollate agar medium employed to examine human blood and fecal samples for vibrios. Cephalothin, added to the original formula (bacitracin, cycloheximide, novobiocin, and colistin), significantly enhanced its selectivity when combined with the filtration method. Sheep blood agar serves as the basal medium in Butzler's agar. Initial incubation is carried out at 42°C and the temperature is then gradually decreased; this allows the growth of C. jejuni but hinders the growth of C. fetus subsp. intestinalis. Modified charcoal cefoperazone deoxycholate agar (mCCDA) is listed in international standard protocols and is widely used worldwide, where it is recommended as the plating medium of choice for the detection and enumeration of Campylobacter spp. It generates satisfactory results and is recommended for selective plating.
The mCCDA medium is based on the original CCDA formula and comprises New Zealand agar, nutrient broth, bacteriological charcoal, sodium pyruvate, casein hydrolysates, ferrous sulfate, and sodium deoxycholate. The selectivity of this medium was further enhanced by replacing cefazolin with cefoperazone. Initially, its development was aimed at thermotolerant Campylobacter spp. isolation from human fecal samples, but it then emerged as a specified standard medium for Campylobacter spp. isolation from food samples. Blood was replaced with sodium pyruvate, charcoal, and ferrous sulfate in the mCCDA medium, increasing the aerotolerance and growth of Campylobacter spp. Casein hydrolysate in this medium promotes the growth of environmental C. lari strains, whereas sodium deoxycholate and cefoperazone provide the required selectivity. Campylobacter-selective mCCDA agar is a widely used blood-free plating medium. It thus avoids the disadvantages of blood, such as easy contamination, short shelf life, and expense. The stickiness of Campylobacter spp. colonies to the plate surface in some cases is the only limitation, as it complicates harvesting. mCCDA and Skirrow media containing different antimicrobials have been used for culturing Campylobacter spp., and cefoperazone in mCCDA proved the more effective selective agent, efficiently suppressing the enteric flora. The higher efficacy of broad-spectrum cefoperazone (a cephalosporin) has been established against Enterobacteriaceae family members and pseudomonads. Karmali agar is a charcoal-based, blood-free selective medium comprising Columbia agar base, hematin, activated charcoal, sodium pyruvate, cycloheximide, cefoperazone, and vancomycin. The Karmali medium was developed to overcome the limitations associated with mCCDA selective agar; it has demonstrated significantly higher selectivity and a better Campylobacter spp. isolation rate from fecal samples than Skirrow's medium. Similar to blood, charcoal acts as a quenching agent, enhancing aerotolerance against the toxicity of oxygen derivatives. Thus, charcoal-based agar is a better alternative to blood-containing agar in developing countries, which face erratic availability of sterile blood. Karmali agar contains sodium pyruvate in the selective supplement, whereas in other blood-free Campylobacter spp. isolation media (mCCDA) it is found in the basal medium. The ferrous sulphate of mCCDA is replaced with hemin in the Karmali medium. Vancomycin in the Karmali medium replaces the deoxycholate of mCCDA and strongly inhibits the growth of Gram-positive microorganisms; it is particularly effective against enterococci and thus eliminates the inherent variability of bile salts. Cefoperazone in this medium efficiently suppresses the growth of Pseudomonas spp., whereas cycloheximide inhibits yeasts more effectively than amphotericin B. The three antibiotic selective agents (cycloheximide, cefoperazone, and vancomycin) in the Karmali medium efficiently restrict the growth of Gram-negative and Gram-positive bacteria and yeasts; during the development of this medium, the efficacy of each selective agent was individually assessed. The Karmali medium has proven more selective than the Skirrow medium. However, some C. coli strains are cephalosporin-susceptible, and the Skirrow medium performs better than the Karmali medium for isolating these strains. In one study, combining the Skirrow and Karmali media produced near-optimal results for thermotolerant Campylobacter spp.
isolation from fecal samples. Charcoal- and cefoperazone-containing Campylobacter spp. isolation media (mCCDA) have been shown to generate better outcomes than earlier formulations.

Chromogenic plating media for Campylobacter spp.

The addition of chromogenic agar media to isolation protocols has enhanced Campylobacter species identification through distinctive colony colors. Synthetic chromogenic enzyme substrates make these media both differential and selective, identifying the target isolate through its enzyme activity. A few commercial chromogenic agar plates are available in Latin America, the USA, and Europe. These plates are used to isolate Campylobacter spp. from meat, carcass rinses, environmental samples, and poultry meat. The sensitivity of chromogenic agars for isolating Campylobacter spp. from food samples is similar to that of traditional plates. CHROMagar™ Campylobacter (CHROMagar™, France), CampyFood® agar (bioMérieux, France), R&F® Campylobacter media (R&F Products, USA), and Brilliance™ CampyCount Agar (Thermo Scientific™, Thermo Fisher Scientific, Inc., USA) are Campylobacter chromogenic media that facilitate visual recognition of Campylobacter spp. colonies without requiring subsequent culturing and confirmation tests. Thus, these media decrease the cost and time of analysis. CampyFood® agar was the first commercial chromogenic-like agar that matched CCDA's capability to isolate and enumerate Campylobacter spp. (C. coli and C. jejuni) from poultry samples. The CampyFood® agar plating medium was recommended for its high selectivity and better performance relative to the specificity (68%) and sensitivity (100%) of mCCDA. Moreover, it eliminates or minimizes contamination by swarming and spreading colonies in tested samples. CampyFood® agar plates are easy to handle and produce results comparable to those of other media. However, some other bacterial species might also grow on the plates, leading to overestimation of Campylobacter spp. colonies; the CampyFood® medium is therefore not fully selective for Campylobacter spp. isolation. An investigation in Chile reported a higher CampyFood® medium-based Campylobacter spp. isolation rate from chicken meat (83%) than with mCCDA (67%). A selective chromogenic medium, Brilliance™ CampyCount agar, was developed explicitly for the enumeration of Campylobacter spp. (C. coli and C. jejuni) from poultry samples. Brilliance™ CampyCount agar comprises an amino acid and salt mix that allows accurate, clear, and specific C. coli and C. jejuni enumeration from poultry carcass samples. The Brilliance™ CampyCount medium was carefully developed to support C. coli and C. jejuni growth while inhibiting the growth of non-target microorganisms. This medium indicates target colonies through a color change to dark red; thus, all C. jejuni/C. coli colonies become readily identifiable within 48 hours on the transparent medium. Studies indicate that the efficiencies of Brilliance™ CampyCount and CampyFood® media are comparable to mCCDA for Campylobacter spp. enumeration in naturally contaminated chicken meat. Brilliance™ CampyCount agar is a potential alternative to mCCDA, but further investigation is required to enhance its selectivity for improved accuracy of Campylobacter spp. enumeration with minimal background microflora. CHROMagar™ Campylobacter is a selective chromogenic medium that is widely used for presumptive identification, direct qualitative detection, and differentiation of the main thermotolerant Campylobacter spp. (C. lari, C.
jejuni, and C. coli) from environmental and food samples. CHROMagar™ Campylobacter comprises a chromogenic substrate, agar, yeast extract, peptones, sodium chloride, and a selective mix. CHROMagar™ Campylobacter is also a blood-free transparent agar, similar to the CLA-S medium, that helps visualize and enumerate Campylobacter spp. colony-forming units (CFU) by producing purple colonies. The R&F® Campylobacter chromogenic agar plating medium targets the C-2 esterase enzyme of C. coli and C. jejuni: these two species are C-2 esterase positive, whereas other microorganisms remain negative for this enzyme. The R&F® Campylobacter chromogenic medium has enhanced sensitivity, and its colonies are easily distinguished by visual identification. All current Campylobacter spp. isolation broths, media, and plates are modifications of media developed almost three decades ago, when achieving microaerobic conditions in laboratories was challenging.
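For orientation, the visual read-outs of the chromogenic media above can be tabulated in code. Colors are taken from the text where stated; the remaining entries are deliberately non-committal, since the text does not specify a colony color for those products.

```python
# Lookup sketch of the visual read-outs described above for chromogenic
# plating media. Only the colors actually mentioned in the text are
# asserted; other entries are hedged.

CHROMOGENIC_READOUT = {
    "CHROMagar Campylobacter": "purple colonies = presumptive Campylobacter",
    "Brilliance CampyCount": "dark red colonies = presumptive C. jejuni/C. coli",
    "CampyFood agar": "characteristic colonies; confirmation advised",
    "R&F Campylobacter": "C-2 esterase-positive C. jejuni/C. coli colonies",
}

for medium, readout in CHROMOGENIC_READOUT.items():
    print(f"{medium}: {readout}")
```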
Campylobacter spp. detection and future directions

Culture-based techniques are the standard cultivation methods for bacterial detection and enumeration in food and water. However, some foodborne and waterborne enteric bacteria, including Campylobacter spp.,
can enter a viable but non-culturable (VBNC) state in which they lose the capability to grow on culture media. Despite their non-culturability, VBNC cells are not considered dead, owing to several differences. A damaged membrane is the main feature of dead cells, which cannot retain plasmids and chromosomal DNA, whereas the membrane of VBNC cells remains intact with undamaged DNA and plasmids. Dead cells become metabolically inactive, but VBNC cells remain metabolically active and continue to respire. Gene expression stops in dead cells, whereas transcription continues in VBNC cells, with ongoing production of mRNA. VBNC cells, in contrast to dead cells, continue to take up amino acids and incorporate them into proteins. VBNC bacterial cells retain their virulence and can cause infection upon entry into hosts. Thus, they are a serious public health concern, particularly for water- and foodborne pathogens. Several studies have reported VBNC C. jejuni colonization in rat guts, suckling mice, fertilized chicken eggs, and 1-week-old chicks. One study successfully resuscitated C. jejuni VBNC cells held in artificial seawater for 142 days by passage through the mouse intestine; thus, VBNC C. jejuni can retain virulence and infectivity. However, the infective capability of environmental VBNC cells without resuscitation remains unclear. An in vitro study has also reported the invasion of Caco-2 human intestinal epithelial cells by VBNC C. jejuni. Disease diagnosis and identification of etiological agents in clinical, water, and food samples still depend heavily on culture-based techniques. The inability to culture microorganisms can be a major limitation in disease diagnosis and treatment, and it complicates pathogen detection in environmental, water, and food samples. Thus, potentially hazardous contaminations could remain undetected, and water- and foodborne VBNC bacteria could seriously threaten public health (Pan and Ren, 2023). VBNC cells in food and water could generally be implicated in low-grade or aseptic infections, which could be mistakenly attributed to viruses if no bacteria are detected. Generally, the enrichment step resuscitates VBNC and damaged cells. Therefore, enrichment culture of water and food bacteria in selective/basal broth notably enhances the retrieval of experimentally injured Campylobacter spp. An enrichment regime involving incubation (4 hours, 37°C) in broth [lysed horse blood (5%), sodium metabisulphite (0.02%), sodium pyruvate (0.02%), and ferrous sulfate (0.05%)] followed by a second incubation (44 hours, 43°C) significantly improved the recovery of damaged Campylobacter spp. cells from river water samples, likely by facilitating the repair of injured cells before exposure to high temperatures. The propidium monoazide (PMA)-viability-qPCR approach has successfully detected the natural occurrence of VBNC Campylobacter spp. cells in environmental samples (chicken manure and barn samples); the same work further demonstrated Campylobacter spp. viability in water and soil for up to 63 and 28 days, respectively. The PMA-qPCR technique has also efficiently detected laboratory-induced C. jejuni VBNC cells in UHT and pasteurized milk. It can also quantify VBNC Campylobacter spp., providing insights into the prevalence of unculturable Campylobacter spp. in agri-food production and the environment. This method offers an effective solution to overcome the limitations of traditional culture-based methods, although it requires costly apparatus and highly trained personnel during sample pre-treatment for successful isolation of VBNC Campylobacter spp. DNA.
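The quantification step of PMA-qPCR can be illustrated with a short worked example. The standard-curve parameters and Ct values below are assumptions chosen purely for illustration, not values reported in the studies cited above; the principle is simply that PMA blocks amplification of DNA from membrane-compromised cells, so the PMA-treated Ct reflects viable (including VBNC) cells.

```python
# Hedged sketch of PMA-qPCR quantification: PMA penetrates dead
# (membrane-compromised) cells and prevents their DNA from amplifying,
# so the Ct of the PMA-treated aliquot reflects viable cells only.
# Slope/intercept are illustrative standard-curve assumptions.

def cells_from_ct(ct: float, slope: float = -3.32,
                  intercept: float = 38.0) -> float:
    """Convert a Ct to cell-equivalents via Ct = slope*log10(N) + intercept."""
    return 10 ** ((ct - intercept) / slope)

ct_untreated = 24.1  # total DNA signal (live + dead cells)
ct_pma = 27.4        # PMA-treated signal (viable cells only)
total, viable = cells_from_ct(ct_untreated), cells_from_ct(ct_pma)
print(f"total ~{total:.2e}, viable ~{viable:.2e}, "
      f"dead fraction ~{1 - viable / total:.0%}")
```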
PCR-based direct detection of Campylobacter spp. cells in environmental and food samples is a time-effective approach compared with culture-based identification and confirmation. Different PCR protocols with diverse primers have been developed for Campylobacter spp. detection in wastewater and water samples. Most PCR-based studies analyzed the absence or presence of Campylobacter spp. in samples, whereas some obtained quantitative results by employing real-time PCR. These protocols facilitated recovery from Campylobacter spp.-seeded cultures. Direct PCR assays can efficiently detect the natural occurrence of Campylobacter spp. in polluted drinking water without an enrichment step, and a direct PCR approach has been used to successfully detect C. jejuni in naturally contaminated water without prior enrichment of samples. PCR-based direct Campylobacter spp. detection in clean drinking water might thus be feasible. However, it can generate false-negative results in samples containing high levels of background bacteria. In contrast to contaminated drinking water samples, Campylobacter spp. levels in milk, poultry, and murky environmental water samples remain comparatively low, with high levels of background microbiota and PCR inhibitors; an enrichment step therefore becomes mandatory before PCR detection. Accordingly, multiple studies have performed enrichment before PCR detection of Campylobacter spp. in various sample types, such as river and spiked estuarine water, spiked and naturally polluted sewage, spiked and naturally contaminated food, natural poultry and human fecal contamination, spiked chicken rinse water, and murky pond water. Generally, enrichment incubation increases the target cell population for better PCR detection. Direct PCR, multiplex PCR, and qPCR can also amplify the DNA of dead cells and naked DNA fragments in water and food samples when no enrichment step is used. The presence of dead Campylobacter spp. cells in water samples indicates contamination but is no longer harmful to public health. Therefore, introducing an enrichment step before the PCR assay enhances the detection of viable cells. Selective enrichment followed by PCR assay has emerged as a standard method for Campylobacter spp. detection in environmental samples. The FISH (fluorescence in situ hybridization) method differentiates DNA fragments from whole cells: a fluorescent Campylobacter spp.-specific oligonucleotide probe is used to label whole cells, followed by epifluorescence microscopy. One study detected C. coli after membrane filtration of spiked tap water and noted hybridized cells with different fluorescence brightness, which helped separate senescent from actively growing C. coli cells. Immune-based assays could be another alternative to traditional culture-based methods. However, immunoassay kits are yet to be validated for Campylobacter spp. detection in food (poultry) samples, possibly owing to matrix-induced sensitivity loss. Despite extensive development and research into alternatives for precise and rapid Campylobacter spp. detection, identification, and quantification in environmental, food, and clinical samples, culture-based techniques are still the gold standard.
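The enrichment-before-PCR reasoning above amounts to a matrix-dependent decision, sketched below. The matrix categories are assumptions drawn from the examples in the text, for illustration only; laboratories would validate such rules per sample type.

```python
# Illustrative decision sketch of the enrichment-before-PCR logic above:
# direct PCR may suffice for clean drinking water, whereas matrices rich
# in background flora or PCR inhibitors call for enrichment first.
# Matrix lists are assumptions based on the examples in the text.

def pcr_workflow(matrix: str) -> str:
    clean = {"drinking water"}
    inhibitor_rich = {"milk", "poultry rinse", "murky pond water", "sewage"}
    if matrix in clean:
        return "direct PCR (no enrichment)"
    if matrix in inhibitor_rich:
        return "selective enrichment, then PCR"
    return "enrich by default; validate matrix-specific inhibition first"

for m in ("drinking water", "poultry rinse", "cheese"):
    print(m, "->", pcr_workflow(m))
```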
Standard organizations recommend immune-based and molecular approaches for the identification and precise confirmation of presumptive Campylobacter spp. colonies, along with conventional confirmatory tests.

Molecular versus culture-based detection of Campylobacter spp. in food and water

Molecular and culture methods of Campylobacter spp. detection present particular advantages and disadvantages relative to each other, which are discussed below.

Detection accuracy and specificity

Traditional culturing grows only viable Campylobacter spp. cells via selective media and thus achieves high specificity by restricting the interference of other microorganisms. This approach detects only live pathogens, indicating a current risk of infection. Molecular techniques (PCR and qPCR) can efficiently detect Campylobacter-specific DNA in complex samples. However, these methods detect both dead and live cells, including the residual DNA of nonviable cells, and can therefore yield false positives with respect to infection risk.

Limit of detection and sensitivity

The sample's initial bacterial load and competing flora can reduce the sensitivity of culture-based methods. Pathogenic VBNC cells of Campylobacter spp. can remain undetected on selective media, resulting in the underreporting of infections. On the other hand, Campylobacter spp. DNA can be detected by the more sensitive molecular approaches even in samples contaminated with background flora. The sensitivity of PMA-qPCR is even better, as it can exclude the dead cells in a sample.

Completion time

The enrichment, selective plating, and incubation steps lengthen culture-based methods, which take several days to produce results. This limits routine pathogen monitoring, where quick outputs are preferred. Conversely, PCR produces results within hours, enabling real-time pathogen monitoring and timely response during outbreaks.

Experimental cost

Microbial culturing can be performed without costly equipment, so these methods are preferred in less well-equipped laboratories. However, laborious culture-based approaches are time-consuming and require more manual handling. Conversely, expensive reagents and equipment (qPCR and PCR thermocyclers) are required for rapid molecular techniques. Moreover, highly trained personnel are required to perform these procedures, which limits their utility in settings with fewer resources.

Strain typing and further analysis

Cultured colonies facilitate further analysis through genotyping, biochemical tests, and antibiotic susceptibility testing, a prerequisite for outbreak tracking and epidemiological studies. Molecular approaches (qPCR, DNA sequencing, and multiplex PCR) facilitate precise and rapid identification of pathogens, genetic analysis, and epidemiological tracking. However, further analyses are restricted in this case, as these methods cannot provide a live bacterial culture.

Efficiency in diverse sample types

Culture-based techniques can be applied to diverse types of water and food samples, although complex food matrices can affect the detection process. Similarly, environmental and food samples can reduce the detection efficiency of molecular approaches. Therefore, an increase in bacterial cell numbers via pre-enrichment is often required before analysis.

Standardization and regulatory acceptance

ISO and FDA have standardized culture-based detection protocols as the gold standard for water and food safety testing. Conversely, rapidly emerging molecular techniques have yet to be universally standardized for Campylobacter spp.
detection in water and food samples. Their consistency across laboratories and their validation remain challenges for regulatory acceptance. Briefly, cost-effective culture-based techniques are highly reliable and essential for live pathogen detection and regulatory compliance. On the other hand, molecular approaches rapidly generate sensitive results and are thus particularly useful for prompt outbreak response; however, high cost and the need for technical assistance restrict their large-scale applicability. Integrating both approaches could provide comprehensive, timely detection of pathogens by balancing the specificity of culture-based methods with the speed of molecular methods.
Campylobacter spp. isolation and detection from water and food sources is necessary for public health, as these pathogens are associated with widespread enteric infections. The specificity and regulatory acceptance of ISO-, FDA-, and PHE-recommended culture methods make them the gold standard for Campylobacter spp. detection; their lengthy procedures and the lower sensitivity associated with background flora are major limitations. These can be overcome by adding selective enrichment and plating media steps for the reliable identification of Campylobacter spp. The use of chromogenic media is a cost-effective and rapid approach that yields distinct colony colors for different Campylobacter spp.; however, such media should be standardized to give consistent results at varying levels of microbial contamination in foods. Undetected VBNC cells of Campylobacter spp. are a major threat to food safety. The sensitivity of advanced immunological and molecular methods (PCR and PMA-qPCR) for Campylobacter spp. is higher than that of conventional procedures, but the need for highly trained personnel and expensive apparatus limits their application. Integrating conventional culturing with recent molecular techniques is the way forward, as it could improve the specificity, speed, and sensitivity of foodborne pathogen surveillance. Simultaneously, sustained adaptation and innovation in Campylobacter spp. detection are mandatory to ensure public health safety, particularly in low-resource regions. Standardization and validation of novel methods are necessary to improve Campylobacter spp. monitoring and curb global infections.
Canada Goldenrod Invasion Regulates the Effects of Soil Moisture on Soil Respiration

With economic globalization, alien plant invasion has become an important ecological problem. Numerous studies have found that invasive plants typically show high net primary productivity and can change aboveground vegetation community structures, thus affecting the quantity and quality of soil carbon input , and subsequent changes in soil respiration may affect carbon pools (e.g., carbon sequestration and carbon input) [ , , ]. Therefore, plant invasion markedly affects the global carbon cycle. Wetland ecosystems constitute an important carbon pool and play an important role in carbon storage. However, wetlands are fragile ecosystems that are vulnerable to invasion by alien plants. Invasion of the Hangzhou Bay wetlands by Canada goldenrod ( Solidago canadensis L.) considerably reduced the soil pH and changed the organic carbon components , and invasion of coastal wetlands of eastern China by Spartina alterniflora has changed the biodiversity and carbon pools of coastal wetland ecosystems . Thus, invasion by alien plants affects community diversity, microbial activity, and the soil physical and chemical properties of wetland ecosystems . Soil respiration is an important component of the carbon cycle in terrestrial ecosystems, and it is the main CO 2 output from the soil carbon pool to that of the atmosphere. This complex biochemical process includes autotrophic (root respiration and root microbial respiration) and heterotrophic respiration (microbial and animal respiration) , and can be affected by the structure and activity of soil biomes, organic matter, and vegetation, as well as soil physicochemical characteristics such as soil temperature, moisture, nutrients, and pH . Among these, soil temperature and moisture are the main factors affecting soil respiration, which in turn is affected by vegetation structure . However, the potential impact of alien plant invasion on soil respiration is so far unclear. Previous studies have found inconsistent effects of invasive plants on soil respiration. Invasion of wetlands by Acacia farnesiana (Linn.) Willd. and Cynara cardunculus (L.) increased soil respiration , and invasion of coastal wetlands by Spartina alterniflora was predicted to markedly increase greenhouse gas emissions . However, this invasion by Spartina alterniflora was shown to reduce soil respiration, which was contrary to other observations, likely because soil respiration depends on the content and input quality of soil total organic carbon, the decomposition process of litter, or the fluctuation of the groundwater level in wetlands [ , , , ]. This suggests that the effects of alien plant invasion on the soil carbon cycle depend on various biotic and abiotic factors [ , , , , , , ]. The effects of soil moisture on respiration are complex. Under low soil moisture, soil respiration is strongly correlated with soil moisture , whereas soil respiration peaks near field capacity. Beyond a certain threshold, soil respiration may decline, and when soil moisture is saturated, soil respiration stops . Solidago canadensis L. is native to North America and is currently considered one of the most deleterious and pervasive invasive species worldwide . In 1913, S. canadensis was introduced to Shanghai, China as an ornamental plant. Subsequently, S.
canadensis spread to the natural environment, including the area south of the Yangtze River, and has become one of the most harmful weeds in China . The impact of S. canadensis on soil respiration has rarely been examined . Previous studies have found that the invasion of riparian wetlands by S. canadensis affected soil respiration and the carbon cycle in the invaded area, and with the intensification of the invasion, soil respiration showed a decreasing trend . This inhibition of soil respiration might be caused by changes in the underground soil microenvironment and the aboveground vegetation community structure under S. canadensis invasion. Invasion by S. canadensis affects autotrophic and heterotrophic respiration, but the contribution of these two components to soil total respiration is unclear. Moreover, soil moisture is important to consider in this regard; however, the specific interaction effect of alien plant invasion and soil moisture conditions on soil respiration remains to be elucidated . Thus, the present study was conducted to investigate the effects of different degrees of S. canadensis invasion on soil respiration under different moisture conditions. We hypothesized that (1) S. canadensis invasion would inhibit soil respiration, as well as all components of soil respiration (autotrophic and heterotrophic respiration); and (2) the effect of invasion on these three types of respiration would depend on soil moisture.
2.1. Experimental Design The experiment was conducted in a nursery at Jiangsu University (32°12′ N, 119°30′ E), Zhenjiang, China. S. canadensis invasion in a riparian wetland habitat (32°14′ N, 119°29′ E) of Zhenjiang was simulated. The originally predominant plant in this riparian wetland habitat was common reed ( Phragmites australis (Cav.) Trin. ex Steud); however, this habitat has been invaded by S. canadensis in recent years. Seeds of P. australis and S. canadensis were collected from a riparian wetland in December 2018. To preclude the effects of S. canadensis invasion on soil characteristics, soil was collected from a non- S. canadensis -invaded green space on the campus of Jiangsu University. The collected soils were sieved to remove stones and visible plant debris, and then placed in plastic pots (height: 26.5 cm, top diameter: 24.5 cm, and bottom diameter: 19.5 cm). After 2 months of cultivation, similar-sized seedlings of P. australis and S. canadensis were carefully transplanted to pots in June 2019. Invasion by S. canadensis was simulated by substituting space for time, i.e., different ratios of P. australis to S. canadensis were used to represent five successive stages of S. canadensis invasion: the non-invasive (NI), early invasive (EI), intermediate invasive (II), dominant invasive (DI), and completely invasive (CI) stages, which have been previously described in detail . Four seedlings were planted per pot, and different soil moisture conditions were simulated using water tanks. The pots were placed in tanks with different water levels, i.e., high, intermediate, and low, which were three-quarters, half, and one-quarter of the pot height, respectively. During the experimental period, soil moisture differed significantly among the three water-level treatments ( p < 0.01; ). All water tanks containing pots were placed in a nursery under natural light, and water was replenished every two days. The experiment used a complete factorial design with three replicates (45 pots in total). 2.2. Soil, Autotrophic, and Heterotrophic Respiration Measurements Soil respiration, including heterotrophic and autotrophic respiration, was measured between 08:00 AM and 10:00 AM on the first and fifteenth day of each month, from 15 July to 15 December 2019, using a closed-chamber system. The closed-chamber system and the method of soil respiration determination have been described in detail in a previous study . In brief, the chamber was inserted directly into the soil after removing the weeds, and the carbon dioxide content in the chamber was recorded every 5 s for 300 s. The topsoil (0–10 cm) temperature and moisture were measured using a soil temperature and moisture meter (TR-6D, Shunkeda Technology Co., Ltd., Beijing, China). The root exclusion method, based on inserting deep gauze collars (diameter: 5.0 cm and height: 30 cm) at the same position in the pots, was used to partition soil total respiration into heterotrophic and autotrophic respiration. Soil total, heterotrophic, and autotrophic respiration were calculated following the equations described in previous studies [ , , ].
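The cited flux equations are not reproduced in this excerpt; a generic closed-chamber calculation, fitting the CO2 accumulation slope and converting it to an areal flux via the ideal gas law, can serve as a minimal sketch. The chamber volume, area, and the synthetic CO2 trace below are assumptions for illustration only.

```python
import numpy as np

R = 8.314  # ideal gas constant, J mol^-1 K^-1

def chamber_flux(times_s, co2_ppm, volume_m3, area_m2, temp_c, pressure_pa=101325.0):
    """Soil CO2 efflux (umol CO2 m^-2 s^-1) from a closed-chamber time series.

    times_s: seconds since chamber closure (e.g., 0, 5, ..., 300)
    co2_ppm: CO2 mole fraction in the chamber headspace at each time point
    """
    slope_ppm_s = np.polyfit(times_s, co2_ppm, 1)[0]             # ppm s^-1 = umol mol^-1 s^-1
    mol_air = pressure_pa * volume_m3 / (R * (temp_c + 273.15))  # moles of air enclosed
    return slope_ppm_s * mol_air / area_m2

# Readings every 5 s for 300 s over a hypothetical chamber geometry
t = np.arange(0, 305, 5)
c = 410.0 + 0.04 * t  # synthetic, steadily rising CO2 trace
print(chamber_flux(t, c, volume_m3=0.005, area_m2=0.0177, temp_c=25.0))
```

Under this partitioning scheme, autotrophic respiration is then obtained by difference, i.e., total respiration from intact soil minus the heterotrophic respiration measured inside the root-exclusion collars.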
2.3. Soil and Vegetation Sample Collection and Preparation Soil samples were collected by mixing topsoil (0–10 cm) obtained from different points along an X-shaped pattern in the non-invasive, intermediate invasive, and completely invasive stage treatment pots across all water-level treatments on 15 June 2019 (experiment start date) and 15 December 2019 (experiment end date). Each soil sample was divided into two parts, one of which was stored at 4 °C until soil microbial biomass and extracellular enzymatic activity analyses were performed, and the other was air-dried for soil characteristics analyses. Plants were harvested on 15 December 2019. The harvested plants were weighed after 72 h of oven-drying at 65 °C. Total biomass and root biomass in the non-invasive, intermediate invasive, and completely invasive stage treatment pots were calculated as the dry mass of all plants per pot. 2.4. Soil Characteristics Soil pH was quantified in soil suspensions at a ratio of 1:5 (air-dried soil weight to deionized water volume). Soil dissolved organic carbon and nitrogen contents were quantified using a total organic carbon analyzer with a nitrogen module (Shimadzu TOC-L, Kyoto, Japan) . Soil nitrate nitrogen content was quantified following the colorimetric method . Soil total carbon and total nitrogen contents were quantified using an elemental analyzer (Vario MACRO; Elementar Analysensysteme GmbH, Langensebold, Germany). Soil total phosphorus content was measured using the molybdate colorimetry method . Soil microbial biomass carbon, nitrogen, and phosphorus contents were quantified using the chloroform fumigation extraction method . Soil microbial community diversity (H′) was quantified using a BIOLOG EcoPlate (Biolog Inc., Hayward, CA, USA) following the measurement procedure described previously . The extracellular activity of carbon-acquiring enzymes (β-D-1,4-cellobiohydrolase, β-1,4-xylosidase, and β-1,4-glucosidase), nitrogen-acquiring enzymes (L-leucine aminopeptidase and 1,4-N-acetylglucosaminidase), and the phosphorus-acquiring enzyme phosphatase was quantified following the procedure described by DeForest (2009) . Microbial energy limitation (VL), microbial nutrient limitation (VA), and microbial carbon use efficiency (CUE) were calculated according to Cui et al. (2020) . 2.5. Statistical Analyses Two-way analysis of variance (ANOVA) was used to test the individual and interaction effects of S. canadensis invasion and water level on soil total, autotrophic, and heterotrophic respiration and on soil characteristics. The variations in soil characteristics between the start-date and end-date samplings were used. Analysis of covariance and linear regression tests were performed to determine the univariate relationship between soil respiration and soil moisture for each degree of S. canadensis invasion. Partial least squares path modeling (PLS-PM) was used to test the possible pathways by which these factors impact the heterotrophic and autotrophic respiration of soil. All tests were executed using SAS version 9.4 (SAS Institute, Cary, NC, USA), except for the PLS-PM, which was executed using Amos in IBM SPSS (version 24.0; SPSS Inc., Chicago, IL, USA).
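For the factorial analysis, a two-way ANOVA of this design can be sketched as follows. This is an illustrative outline, not the authors' code; the file name and column names are assumptions, and the original analysis was run in SAS rather than Python.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Assumed layout: one row per pot measurement, with columns
#   respiration : soil total respiration (umol CO2 m^-2 s^-1)
#   invasion    : NI / EI / II / DI / CI
#   water       : low / intermediate / high
df = pd.read_csv("soil_respiration.csv")

# Main effects of invasion stage and water level, plus their interaction
model = ols("respiration ~ C(invasion) * C(water)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

The same formula extended with soil moisture as a continuous covariate gives the analysis of covariance used to test whether invasion stage alters the respiration–moisture relationship.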
3.1. Effects of Treatments on Soil Total, Heterotrophic, and Autotrophic Respiration Solidago canadensis L. invasion significantly affected soil total, heterotrophic, and autotrophic respiration ( p < 0.01, each), and water level significantly affected soil total ( p < 0.05) and autotrophic respiration ( p < 0.05). However, no interaction effects were observed ( ). In general, the responses of soil total, heterotrophic, and autotrophic respiration to S. canadensis invasion showed similar trends under the different water levels. With increasing S. canadensis invasion (from NI to CI), soil autotrophic respiration first decreased and then increased, whereas soil total respiration and heterotrophic respiration showed a decreasing trend and were lowest at the II stage. In particular, soil total respiration remained below the initial value, whereas autotrophic respiration diverged from this pattern. Compared with the NI (only P. australis present) treatment, at low water levels, S. canadensis growth reduced soil total respiration and heterotrophic respiration in the EI, II, DI, and CI stages by 14.3% and 18.0%, 31.7% and 32.7%, 13.1% and 33.6%, and 8.5% and 34.9%, respectively; soil autotrophic respiration was reduced by 4.7% and 32.0% in the EI and II treatments, but was increased by 36.2% and 60.6% in the DI and CI treatments, respectively ( p < 0.01, each). At intermediate water levels, S. canadensis growth reduced soil total respiration and heterotrophic respiration in the EI, II, DI, and CI treatments by 19.2% and 12.7%, 34.2% and 30.6%, 24.9% and 29.7%, and 14.8% and 35.8%, respectively; soil autotrophic respiration in the EI, II, and DI treatments was reduced by 31.7%, 41.2%, and 15.7%, respectively, but was increased by 25.8% in the CI treatment ( p < 0.01, each). At high water levels, S. canadensis growth reduced soil total respiration and heterotrophic respiration in the EI, II, DI, and CI treatments by 9.7% and 18.8%, 28.8% and 22.0%, 6.3% and 22.0%, and 7.5% and 35.1%, respectively; soil autotrophic respiration was reduced by 19.5% and 42.4% in the EI and II treatments, but was increased by 18.2% and 36.8% in the DI and CI treatments, respectively ( p < 0.01, each; ). 3.2. Correlations of Soil Total, Heterotrophic, and Autotrophic Respiration with Soil Characteristics and Plant Biomass Soil moisture was negatively correlated with soil total, heterotrophic, and autotrophic respiration ( p < 0.01, each). Covariance analysis showed an interaction effect of S. canadensis invasion and soil moisture on soil total respiration ( ). S. canadensis invasion altered the fitted curve between total respiration and moisture. With increasing invasion of S. canadensis , the adverse effect of increased soil moisture on soil total respiration decreased ( ). PLS-PM showed that S. canadensis invasion and water level affected soil nutrient availability, microbial characteristics, and plant biomass, and ultimately affected soil total, heterotrophic, and autotrophic respiration. The total effects of plant root biomass (−0.418), CUE (−0.269), and alkaline phosphatase activity (−0.497) on soil total, heterotrophic, and autotrophic respiration were the highest. In addition, the direct driving factors differed among soil total, heterotrophic, and autotrophic respiration: root biomass and CUE were the direct driving factors of soil total respiration; CUE was a direct driving factor of soil heterotrophic respiration, whereas alkaline phosphatase activity and root biomass were those of soil autotrophic respiration ( ).
4.1. Effects of S. canadensis Invasion on Soil Total, Heterotrophic, and Autotrophic Respiration Compared with native plants ( P. australis ), alien invasive plants generally demonstrate high growth rates, high biomass production, and strong reproductive capacity, which affect the quality and quantity of soil carbon input. On this basis, S. canadensis invasion might be expected to promote soil respiration. However, the results showed that invasion by S. canadensis reduced soil respiration, and its effects on soil total, heterotrophic, and autotrophic respiration differed. At the early stage of invasion by S. canadensis , soil total, heterotrophic, and autotrophic respiration were all inhibited, whereas at the DI stage, autotrophic respiration was promoted. In particular, autotrophic respiration was higher at the CI than at the EI stage ( ). These results confirm our original hypotheses. Solidago canadensis L. was found to have different driving mechanisms for soil total, autotrophic, and heterotrophic respiration . In detail, S. canadensis can affect soil total respiration by affecting the root system of the invaded vegetation community and the composition and structure of the microbial community through changing the availability of soil substrates. S. canadensis invasion alters autotrophic respiration by interfering with root nutrient absorption and utilization and limiting root physiological activities . Moreover, S. canadensis alters heterotrophic respiration by affecting the microbial decomposition of soil organic matter and litter through changing the composition, structure, and metabolic activity of the soil microbial community . Allelochemicals released by S. canadensis , including α-pinene, limonene, and germacrene, affect the availability of soil substrates . These complex compounds also affect the soil microbial community structure, microbial metabolic limitations, and microbial nutrient utilization, which in turn alter microbial respiration . This was confirmed by the observed changes in H′ ( p < 0.01) and CUE ( p < 0.05) in the present study ( ). PLS-PM revealed that S. canadensis invasion affected the soil microbial community and carbon availability by reducing carbon-related substrate availability, thereby inhibiting the activities of extracellular enzymes and microbial metabolism. This inhibiting effect may further force microbes to improve their utilization of carbon and reduce the release of CO 2 , which consequently suppresses microbial respiration ( ). Therefore, the effects of S. canadensis invasion on soil total, heterotrophic, and autotrophic respiration may be caused by changes in soil substrate availability. 4.2. Effects of Water Level on Soil Total, Heterotrophic, and Autotrophic Respiration Water levels produced disparate effects on soil respiration under different degrees of S. canadensis invasion. Soil respiration decreased with increasing water levels ( ). The water-level effect on soil respiration was attributed to differences in vegetation biomass and growth rate, soil microbial activity, and soil nutrient availability. The effects of soil moisture on soil respiration are complex, and it is generally believed that there is a threshold for the effect of soil moisture on soil respiration, which typically depends on field capacity.
Generally, the effect of soil moisture on soil respiration can be divided into three situations: (1) below field capacity, soil respiration is positively correlated with soil moisture; (2) within a certain range, there is no pronounced relationship between soil respiration and moisture; and (3) above field capacity, soil respiration is negatively correlated with soil moisture. In the present study, a riparian wetland environment was simulated with soil moisture exceeding typical field capacity. Therefore, soil respiration showed a negative response to increased soil moisture. The effect of soil moisture on soil respiration was mainly reflected in three aspects. First, soil moisture is necessary for soil microbial activity and plant root physiology. Under low or high soil moisture, protective mechanisms of plants and soil microbes may help mitigate adverse effects. For example, when soil moisture is too low, soil microbes divert energy to produce protective solutes, thus preventing adverse effects on plants and soil microbes. During this process, the release of CO 2 decreases, thereby affecting soil respiration . Second, soil moisture directly regulates the permeability of soil pores to oxygen . The content and diffusivity of oxygen are reduced as soil moisture increases, thereby directly affecting the respiration of soil microbes and plant roots . Excessive soil moisture inhibits the diffusion of oxygen in the soil, consequently limiting the growth of plant roots and reducing soil autotrophic respiration (root respiration) . Meanwhile, the activity of aerobic microbes is also reduced, consequently affecting the decomposition of soil organic matter and the nutrient utilization pattern of microbes. These changes in soil microbes inhibit heterotrophic respiration (microbial respiration). The higher soil moisture in the current study affected soil autotrophic and heterotrophic respiration by affecting plant roots and soil microbes. Third, soil moisture can change soil DOC, which is the main energy source for soil microbial activity. Increased soil moisture can facilitate the diffusion of soluble organic carbon in the soil, making it easier for microbes to absorb and utilize, thereby promoting microbial respiration . However, these previous results are contrary to those of the present study. One possible explanation is that the promoting effect of higher moisture on the diffusion of DOC in the soil is reduced when a threshold of field water volume is exceeded ( ). Soil moisture was previously suggested to affect the soil respiration process by altering the soil pH and soluble substance concentrations . In the present study, changes in soil moisture reduced the soil pH and further affected plant growth (total biomass and root biomass) and the community composition and carbon use efficiency of microbes. In addition, changes in soil moisture also reduced the concentration of dissolved organic matter and affected the stoichiometric balance of soil nutrients ( ). These alterations in plants and soil microbes affect soil respiration. 4.3. S. canadensis Invasion Affects Soil Respiration Responses to Soil Moisture In the present study, soil moisture was significantly negatively correlated with soil total, heterotrophic, and autotrophic respiration, and S. canadensis invasion altered the negative correlation between soil moisture and soil respiration ( ; ). The invasion process of S.
canadensis reduces the negative effect of increased soil moisture on soil respiration, whereas complete invasion may gradually reinstate the adverse effects of increased soil moisture on soil respiration. It is possible that interspecific competition between native and invasive species intensifies the restriction of substrate availability on soil respiration when water is relatively abundant and reduces the negative effect of water-related factors to a certain extent . This indicates that the impact of S. canadensis invasion on soil respiration occurs not only through changes in substrate availability but also through subsequent effects on soil moisture. With S. canadensis invasion, soil autotrophic respiration first decreased and then increased under each water level, and soil total respiration and heterotrophic respiration showed a continuous decreasing trend ( ). In terrestrial ecosystems, soil respiration increases with increasing soil moisture , peaking near field capacity. This was inconsistent with the results of the present study, possibly because this study simulated a nearshore wetland system. Soil moisture was thus high (gravimetric moisture content >30%) and exceeded field capacity, resulting in reduced soil respiration. Soil autotrophic respiration represents the carbon flux produced by plant roots, mycorrhizae, and rhizosphere microorganisms . As the invasion of S. canadensis increases net primary productivity, more carbon is allocated to aboveground plant organs, soil nutrients become limited, and autotrophic respiration is reduced. A previous study showed that autotrophic respiration is stimulated by increasing soil water through an increased carbon substrate supply and improved soil nutrient availability . Under S. canadensis invasion, the aerenchyma of the root system expands. Under the influence of soil moisture, plants must balance growth, nutrient and water uptake, and hormone regulation, and high soil moisture conditions promote respiration of the plant root system . This may explain why soil autotrophic respiration first showed a decreasing trend and then an increasing trend.
Solidago canadensis L. invasion significantly reduced soil respiration, and the inhibitory effect on autotrophic respiration was stronger than that on heterotrophic respiration. Water levels affected soil total respiration and autotrophic respiration. The changes in soil respiration may be related to the alteration in soil substrate availability induced by S. canadensis invasion and to fluctuations in moisture conditions. The change in soil substrate availability may not only affect the uptake and utilization of nutrients by plants and root physiological activities, but also affect soil heterotrophic respiration. As soil moisture can act as a solvent and mobile carrier of soil nutrients, it could regulate the response of soil respiration to S. canadensis invasion. This study provides a reference for predicting the dynamics of the carbon cycle during the invasion process of S. canadensis and a scientific basis for the sustainable development and management of riparian wetlands invaded by alien plants.
A Systematic Review Exploring Empirical Pharmacogenomics Research Within Global Indigenous Populations | 25d5d1de-6e6a-4a6a-b86f-16cfc6eb40ca | 11494250 | Pharmacology[mh] | Introduction Precision medicine has generated considerable hope of beneficial clinical outcomes in its scope and its potential to transform medicine and healthcare, ultimately improving population health (Bayer and Galea ; Collins and Varmus ). A steady stream of new discoveries linking genes and single‐nucleotide polymorphisms (SNPs) to disease risk or drug responses have paved the way for advances in genomic medicine. However, despite this wealth of genomic knowledge and consequent clinical benefits, equitable clinical research, provision of genomic treatment services and culturally safe, acceptable genomic diagnostic tools, the treatment options for global Indigenous populations have remained limited. The ability to harness precision medicine approaches to address ongoing health disparities within Indigenous populations is needed, but it requires research using culturally respectful approaches with Indigenous guidance to select appropriate methods to understand the true nature of the genetic variation of Indigenous populations. The diversity of ethnicity, race, and ancestry in the way that genetic knowledge is discovered, classified, and applied considerably limits efforts to achieve health equity and eliminate health disparities for Indigenous people. The multidimensional nature of a person's identity, life experiences, and exposure to social determinants of health are not reflected within most large national genomics datasets, thus further limiting efforts to advance equitable genomic research (Landry et al. ; Bonham, Green, and Pérez‐Stable ). Inadequate sampling and representation of genetic diversity are widely recognized as a significant bias inherent in genomic databases, which has direct implications for the healthcare available to minority populations (Perera ). There are concerns that those who continue to experience the greatest health disparities are benefitting the least from these progressive scientific discoveries. Thus, the challenge is to ensure that the option to engage in genomic research and clinical care is rightfully distributed among many population groups, and that these databases and clinical testing consider genetic diversity to achieve meaningful outcomes. The ability of Indigenous peoples to have the same access to genomic tools for diagnosis and the capacity to have a choice in this scientific space is therefore crucial. In this era of precision medicine, genetic variation is used to predict drug responses, translating genomic medicine into direct applicability in real‐time patient care through a sophisticated “bench‐to‐bedside” pathway (Singh ). The scope for health improvement using medical advancements such as pharmacogenomics, which is the study of DNA variations to develop personalized treatment approaches (Roden et al. ) through the analysis of genetically distinct populations (Nagaraj and Toombs ) is becoming increasingly advanced. Even so, the possibility of exacerbating existing inequities and health disparities that Indigenous people experience is a striking reality, with a glaring “genomic gap” (Jaya Shankar et al. ) becoming increasingly evident. 
As personalized medicine and treatment focus attention on prevention research and the exploration of targeted therapies, pharmaceutical options, and public health strategies (Singh ), it is vital that efforts be made to improve the inclusion and involvement of Indigenous people. However, for a range of valid reasons, including ethical and privacy‐related concerns, Indigenous people globally have consistently resisted genetic research (Taitingfong et al. ). Initiatives aimed at improving the cultural appropriateness, availability, access, and adaptability of genetic technologies and genomic research remain limited (Garrison, Hudson, et al. ). Concerns from Indigenous peoples surrounding a lack of co‐design or engagement (Behring et al. ), inadequate informed consent (Boyer et al. ), a genuine fear of exploitation (Boyer et al. ), and harmful, negative representation from genetic research have been highlighted (Garrison, Hudson, et al. ). Several reasonable solutions embedded within cultural context have been detailed previously in the literature, including co‐designed, co‐led, and research‐informed ethical frameworks (Caron et al. ). Studies have shown the need for explicit discussions with Indigenous communities that foster community‐engaged research to build genetic research capacity and genomic knowledge based on Indigenous partnerships (Hiratsuka et al. ). Researchers must continue to tackle the ethical, privacy‐related, and technical issues that impede the conduct of genomic analysis in order to provide the foundations of precision medicine within vulnerable and/or distinct population groups, thereby ensuring inclusivity in future clinical care opportunities (Pratt et al. ). To begin to understand the possible contributions of precision medicine approaches to Indigenous populations, it is necessary to review extant research in this space and determine the scope for pharmacogenomics and genomics research. Aims This systematic review aimed to synthesize global empirical evidence involving Indigenous populations for genomics research with a particular focus on pharmacogenomics. The overarching intent of this review is to provide insight regarding how research has been conducted, the role of Indigenous participants in such research, and the benefits or outcomes achieved for Indigenous participants and/or communities. Methodology The protocol for this systematic review was registered on the Prospero database (CRD42021257226) and was conducted and reported using the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) protocol for systematic reviews (Moher et al. ). 3.1 Search Strategy A systematic search was conducted using the following databases: PubMed, Medline, Embase, Cochrane, Scopus, CINAHL, and Web of Science, with search strategies being adjusted according to the requirements for each database. These searches were undertaken using keywords that related to the following themes: (i) Indigenous populations, (ii) pharmacogenomics, and (iii) precision medicine, with related terms and Medical Subject Headings. A pilot search was initially conducted, using a much longer list of both general and specific search terms to identify different Indigenous populations and possible key terms for inclusion. A test search of each term was performed by removing the term, re‐running the search, and comparing the results. 
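In procedural terms, this pilot check is a leave-one-out comparison of record counts. The sketch below is a hypothetical illustration only; `run_search`, which queries a bibliographic database and returns the number of records retrieved, is an assumed stand-in for whatever interface each database provides.

```python
# Hypothetical leave-one-out redundancy check for pilot search terms.
def prune_redundant_terms(terms, run_search):
    """Keep only terms whose removal changes the number of records returned.

    terms: list of candidate search terms
    run_search: callable taking a list of terms and returning a record count
    """
    baseline = run_search(terms)
    kept = []
    for term in terms:
        reduced = [t for t in terms if t != term]
        if run_search(reduced) == baseline:
            continue  # same count without it: the term is redundant
        kept.append(term)  # count changed: the term contributes records
    return kept
```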
If the removal of a given term did not alter the number of records returned, then the removed term was considered redundant and was not included in the final search string. Results were restricted to peer-reviewed articles in the English language published between January 2010 and July 2022. The initial search was conducted in June 2021 and updated in July 2022. Letters, editorials, and opinion pieces were not included in this systematic review (see Appendix for the full search strategy and MeSH terms). 3.2 Inclusion Criteria To be eligible for inclusion, studies needed to meet several criteria, as described in Table . Any study design (i.e., cross-sectional, longitudinal, survey, experimental, program evaluation, qualitative, or mixed methods) that intentionally included commentary or analysis of the outcomes of empirical Indigenous genomics was eligible for inclusion. Empirical research included any qualitative, quantitative, or mixed methods research studies. Qualitative studies included interviews, open-ended surveys, participant observation, or focus groups. Mixed methods studies were only considered if data from the quantitative or qualitative components could be clearly extracted. Studies were only included if findings were analyzed, reported, or discussed separately. No literature or systematic reviews were included in the results of this systematic review. Commentaries, perspectives, letters, reviews, editorials, opinion pieces, and grey literature were excluded. 3.3 Screening Abstracts and titles were screened independently by two reviewers (BN and RV) to ensure the studies met the inclusion criteria. Any discrepancies regarding study eligibility were resolved through discussion with a third reviewer (KMR). All three reviewers (BN, RV, and KMR) discussed and agreed on all studies to be included in this review. The full texts of the remaining studies were then screened and analyzed for relevance and eligibility by two reviewers (BN and RV). 3.4 Data Extraction and Analysis Data extraction and content analyses for both quantitative and qualitative empirical studies were conducted independently by two reviewers (BN and RV). Data extracted included an overview of study characteristics describing the study population, methods, and key outcomes (Table ). Two reviewers (BN and KMR) independently assessed the quality of studies using questions adapted from published criteria on the quality assessment of interview, focus group, and survey studies using the mixed methods appraisal tool (MMAT) (Tong, Sainsbury, and Craig ). This tool has been specifically designed for systematic appraisal efforts in mixed studies reviews that include qualitative, quantitative, and mixed methods studies. Scoring was based on 12 criteria distributed across the following domains: (i) description of aims and objectives, (ii) description of methods, (iii) participant selection, (iv) data collection, (v) data analysis, (vi) reporting, and (vii) engagement. Based on these criteria, studies were identified as being of good or poor quality (Table ).
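The mapping from the 12 criteria to a binary good/poor rating can be sketched as follows. This is purely illustrative; the review does not state the numeric cutoff it used, so the threshold below is an assumption.

```python
def classify_quality(criteria_met, total_criteria=12, threshold=0.75):
    """Label a study 'good' if it satisfies at least `threshold` of the criteria.

    criteria_met: number of the 12 MMAT-derived criteria the study satisfies.
    The 75% cutoff is an assumed example, not the review's stated rule.
    """
    return "good" if criteria_met / total_criteria >= threshold else "poor"

# Example: a study meeting 10 of 12 criteria
print(classify_quality(10))  # -> "good"
```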
Results
Initial searches retrieved 427 articles (Figure ). After removing duplicates, 415 articles remained. Manual screening of titles and abstracts resulted in the selection of 77 articles that underwent full-text screening and were reviewed for their eligibility and relevance to this systematic review. Full-text screening resulted in a final 30 studies being included in the review (Table ). Of these, 16 were quantitative studies (Jaya Shankar et al. ; Fohner et al. , ; Cox et al. ; Tanner et al. ; Nagar et al. ; Begnaud et al. ; de Carvalho et al. ; Zhernakova et al. ; Xhakaza et al. ; Ioannidis et al. ; Fernandes et al. ; Moreira et al. ; Farinango et al. ; O'Connell et al. ; Naranjo et al. ), while 14 were identified as qualitative (Hiratsuka et al. , ; Sahota ; Taualii et al. ; Hudson et al. ; Morgan et al. ; Dirks et al. ; Garrison, Barton, et al. ; Ridgeway et al. ; Hiratsuka, Brown, and Dillard ; Shaw et al. ) or mixed methods studies (Ridgeway et al. ; Nasir, Vinayagam, and Rae ; Beaton et al. ).

4.1 Quality Assessment and Risk of Bias Analyses
Quality assessment and risk of bias for all selected quantitative, qualitative, and mixed methods studies were assessed using the MMAT (Table ). The MMAT has been shown to have high validity (Hong et al. ) and reliability (Souto et al. ), and is a useful tool for appraising the literature (Hong, Gonzalez-Reyes, and Pluye ). All but one of the 30 studies were of good quality, with a low or moderate risk of bias.

4.2 Location and Populations
Most studies were conducted with Indigenous populations from the United States of America (n = 11) (Hiratsuka et al. ; Fohner et al. , ; Tanner et al. ; Begnaud et al. ; Moreira et al. ; Dirks et al. ; Garrison, Barton, et al. ; Shaw et al. ; Souto et al. ; Hong, Gonzalez-Reyes, and Pluye ), one of which also included Latin American countries (Naranjo et al. ). Five studies were conducted with Indigenous Alaskan communities (Fohner et al. ; Dirks et al. ; Hiratsuka et al. ; Hiratsuka, Brown, and Dillard ; Shaw et al. ), of which one specifically included populations in Anchorage, Alaska (Fohner et al. ). Three studies were conducted with Indigenous Australians (Jaya Shankar et al. ; Cox et al. ; Nasir, Vinayagam, and Rae ), two with Indigenous Brazilians (de Carvalho et al. ; Fernandes et al. ), two with New Zealand Māori (Hudson et al. ; Beaton et al. ), and two with Indigenous South African populations (Xhakaza et al. ; O'Connell et al. ). Other studies involved Indigenous populations from Russia (Zhernakova et al. ), Hawai'i (Taualii et al. ), Canada (Morgan et al. ), the islands of Polynesia (Ioannidis et al. ), Ecuador (Farinango et al. ), and Colombia (Nagar et al. ).

4.3 Content Analysis
Of the quantitative, case–control, and cohort studies (n = 16, 53%), 11 (37%) specifically detailed obtaining ethical research committee approval and participant consent (Jaya Shankar et al. ; Fohner et al. , ; Cox et al. ; Tanner et al. ; de Carvalho et al. ; Zhernakova et al. ; Xhakaza et al. ; Fernandes et al. ; Farinango et al. ; O'Connell et al. ; Naranjo et al. ). Only four of the quantitative studies (13%) sought approval from Indigenous review boards or tribal groups/leaders (Jaya Shankar et al. ; Fohner et al. , ; Cox et al. ; Tanner et al. ).
Studies using qualitative or mixed methods methodologies (n = 14, 47%) mostly included the participation of Indigenous people through research interviews (Garrison, Hudson, et al. ; Sahota ; Hudson et al. ; Nasir, Vinayagam, and Rae ; Beaton et al. ) (n = 6, 20%) or focus groups (Hiratsuka et al. ; Taualii et al. ; Morgan et al. ; Dirks et al. ; Ridgeway et al. ; Hiratsuka et al. ; Hiratsuka, Brown, and Dillard ; Shaw et al. ; Rosas et al. ; Beans et al. ) (n = 9, 30%). Genomics research involving Indigenous populations fell into two distinct areas. Studies either explored genetic variations among Indigenous populations, whether associated with disease (Cox et al. ; Begnaud et al. ; de Carvalho et al. ; Zhernakova et al. ; Ioannidis et al. ; Fernandes et al. ; Naranjo et al. ) (n = 7, 23%) or serving as important markers of drug metabolism (Jaya Shankar et al. ; Fohner et al. , ; Tanner et al. ; Nagar et al. ; Xhakaza et al. ; Moreira et al. ; Farinango et al. ; O'Connell et al. ) (n = 9, 30%), or aimed to understand the perspectives of Indigenous populations regarding the conduct of genomics research (Garrison, Hudson, et al. ; Hiratsuka et al. ; Taualii et al. ; Hudson et al. ; Morgan et al. ; Dirks et al. ; Ridgeway et al. ; Hiratsuka et al. ; Hiratsuka, Brown, and Dillard ; Shaw et al. ; Nasir, Vinayagam, and Rae ; Beaton et al. ; Rosas et al. ; Beans et al. ) (n = 14, 47%). A significant number of the studies investigating genetic variations focused specifically on CYP gene variations (Jaya Shankar et al. ; Fohner et al. , ; Tanner et al. ; Naranjo et al. ) (n = 5, 17%). In this systematic review, we aimed to explore the Indigenous perspective on how research was conducted, the role of Indigenous participants in that research, and the benefits or outcomes achieved for Indigenous participants and/or communities through genomics research. The outcomes described below revolve around these three main domains.

4.4 The Indigenous Perspective on Genomics Research
Studies exploring the role of Indigenous participants in genomics research mostly focused on keeping mutual research priorities and health needs (Garrison, Hudson, et al. ; Taualii et al. ; Hudson et al. ; Ridgeway et al. ; Shaw et al. ; Nasir, Vinayagam, and Rae ; Beaton et al. ) (n = 7, 23%) at the forefront of all genomics research conducted within Indigenous communities. Participants also highlighted the importance of incorporating Indigenous governance (Hiratsuka et al. ; Taualii et al. ; Hudson et al. ; Morgan et al. ; Ridgeway et al. ; Nasir, Vinayagam, and Rae ) (n = 6, 20%), and several studies specifically focused on control of data access and sharing (Sahota ; Hudson et al. ; Garrison, Barton, et al. ; Nasir, Vinayagam, and Rae ; Beaton et al. ) (n = 5, 17%). Ensuring informed consent (Sahota ; Taualii et al. ; Hudson et al. ; Hiratsuka et al. ; Nasir, Vinayagam, and Rae ; Beaton et al. ) (n = 6, 20%) and transparency across all research activities (Taualii et al. ; Hudson et al. ; Dirks et al. ; Hiratsuka et al. ) were also significant factors highlighted by participants. Community engagement (Hudson et al. ; Dirks et al. ; Beaton et al. ), necessary participant education (Taualii et al. ; Beans et al. ), and continuous communication with participants (Hudson et al. ; Dirks et al. ; Beans et al. ) were additional important considerations highlighted by participants in the studies reviewed.
Other significant aspects included the need to establish mutual partnerships (Hiratsuka et al. ; Nasir, Vinayagam, and Rae ), ensure equal participation (Hiratsuka et al. ; Beaton et al. ), maintain trust (Hiratsuka et al. ; Morgan et al. ; Beaton et al. ), and uphold accountability (Hudson et al. ; Hiratsuka, Brown, and Dillard ) and reciprocity (Morgan et al. ). Ensuring culturally respectful research procedures (Dirks et al. ; Nasir, Vinayagam, and Rae ; Beaton et al. ) and incorporating "cultural logic" (Sahota ) were also emphasized. Genuine concern regarding future unknown uses or data-sharing capabilities (Hiratsuka, Brown, and Dillard ; Rosas et al. ) and culturally appropriate specimen disposal (Sahota ; Hiratsuka et al. ; Beaton et al. ) were priorities that emerged from the Indigenous populations involved in these studies. Studies also highlighted continued worry and fear of potential discrimination or stigmatization (Morgan et al. ; Hiratsuka, Brown, and Dillard ; Shaw et al. ; Rosas et al. ), distrust (Morgan et al. ; Beaton et al. ; Rosas et al. ; Nasir, Vinayagam, and Rae ), and concerns about the security and confidentiality of genetic information (Morgan et al. ; Hiratsuka, Brown, and Dillard ; Rosas et al. ). Cost and affordability were also raised as concerns by some participants (Shaw et al. ; Rosas et al. ).

4.5 The Role of Indigenous Participants in Genomics Research
No study specifically explored the role of participants in genomics research involving Indigenous people or communities. Similarly, no studies specifically described Indigenous participant involvement in data collection or data analysis. However, a small number involved Indigenous participants in data interpretation (Morgan et al. ; Hiratsuka, Brown, and Dillard ) or used methodologies established by the Indigenous participants or communities themselves (Jaya Shankar et al. ; Hiratsuka, Brown, and Dillard ). A significant number of studies acknowledged community members or participants within the study (Jaya Shankar et al. ; Garrison, Hudson, et al. ; Fohner et al. , ; Cox et al. ; Nagar et al. ; Xhakaza et al. ; Ioannidis et al. ; Sahota ; Hudson et al. ; Morgan et al. ; Dirks et al. ; Ridgeway et al. ; Nasir, Vinayagam, and Rae ; Beaton et al. ) (n = 15, 50%); however, no study included an Indigenous community member or participant specifically as a co-author.

4.6 Benefits or Outcomes Achieved From Genomics Research in Indigenous Populations
All studies identified in this review explored benefits or outcomes from the conduct of genomics research with Indigenous populations. Studies either focused on genetic data analysis that contributed to identifying unique genetic variations and to pharmacogenetics research (Jaya Shankar et al. ; Fohner et al. , ; Cox et al. ; Tanner et al. ; Nagar et al. ; Begnaud et al. ; de Carvalho et al. ; Zhernakova et al. ; Xhakaza et al. ; Ioannidis et al. ; Fernandes et al. ; Moreira et al. ; Farinango et al. ; O'Connell et al. ; Naranjo et al. ) or engaged in understanding perspectives on the potential benefits or outcomes of genomics research (Hiratsuka et al. ; Sahota ; Taualii et al. ; Hudson et al. ; Morgan et al. ; Dirks et al. ; Garrison, Barton, et al. ; Ridgeway et al. ; Hiratsuka et al. ; Hiratsuka, Brown, and Dillard ; Shaw et al. ; Nasir, Vinayagam, and Rae ; Beaton et al. ; Rosas et al. ; Beans et al. ). A focus of genomics research was to facilitate beneficial outcomes through the better optimization of drug therapy for Indigenous people (Fohner et al. , ; Nagar et al. ; Xhakaza et al. ; Moreira et al. ; O'Connell et al. ; Shaw et al. ; Beans et al. ; Henderson et al. ) (n = 9, 30%). However, other studies also explored harmful outcomes and potential risks of genomics research that could result from the misuse of genomic information (Hiratsuka et al. ; Dirks et al. ; Ridgeway et al. ; Shaw et al. ; Rosas et al. ). Some articles explored perspectives on genomic biobanks (Hiratsuka, Brown, and Dillard ; Beaton et al. ) (n = 4, 13%) and how understandings of specimen disposal (Sahota ) and the security and confidentiality of genetic information (Rosas et al. ) can contribute to beneficial genomics research outcomes for Indigenous communities.

4.7 Indigenous Engagement and Participation in Genomics Research
Using a social-ecological framework (Nutbeam, Harris, and Wise ) that considers levels based on the complex interplay between individual, relationship, community, and societal factors, we appraised Indigenous engagement and participation in genomics research (Table ). Also known as the socio-ecological model (SEM), this conceptual framework is used to understand the multiple levels of interrelated factors influencing health and health behaviours (Nutbeam, Harris, and Wise ). The SEM levels are commonly defined as the individual, interpersonal, community, organizational, and policy levels. The naming of these levels varies slightly, but the fundamental idea of the model is that public health efforts need to use a combination of interventions at all levels and across society (Nutbeam, Harris, and Wise ; Schölmerich and Kawachi ). This model can also be applied to evaluate which elements of precision medicine and genomics research work for whom, why, and how in an Indigenous context. Furthermore, the SEM approach holds great potential for complementing the life-course perspective in reducing existing disparities in health outcomes from birth for Indigenous populations (Schölmerich and Kawachi ). Key outcomes from the systematic review identified enablers and challenges across the SEM levels, under the following themes: perceptions and ownership of research (individual and collective); accountability and safeguards; meaningful partnerships between researchers and communities; Indigenous leadership and governance; capacity building and sharing with communities; autonomy and consent over research processes (e.g., data management); and health and socioeconomic inequities and recognition of Indigenous ways of being, doing, and knowing. Promoters and barriers for each of these identified themes are outlined in Table . Some papers were particularly strong, with enablers that were important in acknowledging Indigenous protocols and the respectful conduct of research with Indigenous communities (Hudson et al. ; Morgan et al. ; Beaton et al. ). Key informants in these studies spoke about the need to protect Indigenous, and specifically Māori, interests through Māori control, which promoted concepts of power-sharing over benefit-sharing (Sahota ; Hudson et al. ; Beans et al. ). Across studies, individuals valued clear and ongoing communication, particularly in the context of past experiences involving a lack of knowledge transfer across both clinical and research-based settings (Dirks et al. ; Hiratsuka et al. ; Hiratsuka, Brown, and Dillard ). This was a barrier in the studies that particularly highlighted the subsequent inaccessibility of gained knowledge (Dirks et al. ; Hiratsuka et al. ; Hiratsuka, Brown, and Dillard ).
Safeguards around data sharing and data management (Taualii et al. ; Hudson et al. ; Garrison, Barton, et al. ; Beaton et al. ) were also highlighted as an important inclusion, with robust policies regarding data confidentiality, privacy, and the promotion of informed consent noted as major considerations (Hudson et al. ; Morgan et al. ) and as omissions in some studies (Garrison, Barton, et al. ; Rosas et al. ).
Discussion
This review aimed to explore and describe genomics research being conducted with Indigenous participants globally. The findings reported in this review identified a relatively small number of studies, highlighting a growing gap in Indigenous genomics research.
Quantitative research studies had a strong focus on understanding a variety of drug metabolism questions, including nicotine metabolism (Tanner et al. ), Phase I drug metabolism pathways (Fohner et al. , ; Naranjo et al. ), response to metformin (Xhakaza et al. ), and antiretroviral metabolism (O'Connell et al. ). Several other studies were interested in genetic comparisons between populations, including Antioquia and Chocó communities (Nagar et al. ), Russian communities (Zhernakova et al. ), and groups in Latin America (Moreira et al. ). Inflammation was the focus of one study that considered polymorphisms in cytokine genes (Cox et al. ), and two further studies focused on specific illnesses, namely lung adenocarcinoma (Begnaud et al. ) and acute lymphoblastic leukemia (de Carvalho et al. ). Qualitative and mixed methods studies explored Indigenous perspectives regarding genomics research conducted with Indigenous participants. No study specifically indicated the involvement of Indigenous participants, communities, or researchers in the initial study design, data collection, or analysis, although a few reported incorporating participant feedback on the final outcomes. Studies acknowledged Indigenous involvement; however, specific descriptions of research being Indigenous-led or Indigenous-authored were limited. Harnessing genomic advancement for Indigenous communities to improve health and well-being is not only a priority but an essential way forward to enable Indigenous-focused medical and health research. This review highlights that the considerations for those conducting genomics research with Indigenous communities are many and varied, and can be assessed across the different SEM levels. The study team considered that objectives for research with Indigenous communities needed to provide evidence of an understanding of Indigenous perspectives, the role of Indigenous participants, and the benefits of the outcomes achieved. Table highlights which studies helped or hindered efforts to achieve these objectives. Similar findings appear in the literature, with overall findings pointing to increased Indigenous control of research processes alongside a commitment to inclusion, reciprocity, and increasing opportunities for research excellence (Ewen, Ryan, and Platania-Phung ; Australian Institute of Aboriginal and Torres Strait Islander Studies ). Ensuring Indigenous communities are included in the design, implementation, analysis, and outcomes of genomics research is an important aspect that needs to be given more attention. Guidelines for implementing ethical research and best practices in research involving Indigenous participants and communities, which value, prioritize, and empower Indigenous traditional knowledge and equal participation, have been developed for various populations across the globe (National Health and Medical Research Council (Australia) ; Australian Institute of Aboriginal and Torres Strait Islander Studies ; Hudson et al. ). However, without the active participation and governance of Indigenous consumers during all stages of research, the potential to disempower communities and misinterpret research outcomes can arise (MacLean et al. ; Drawson, Toombs, and Mushquash ). This review also identified ongoing concern regarding the potential for discrimination, and the lack of trust, reciprocity, and transparent partnerships or engagement that continues to exist despite efforts to redress historical wrongdoing.
Indigenous academic involvement and authorship inclusion are also important; this review highlights both the limited acknowledgment of Indigenous contributors and the inability to properly determine whether the authors were Indigenous. Acknowledgment and recognition of Indigenous authorship may warrant further review and consideration within Indigenous research. Developing new models of leadership and governance, and enabling functional, transparent, best-practice standards and operating protocols, are requirements for establishing effective community engagement, informed consent, and the management and use of biological samples and data in the ethical conduct and management of Indigenous genomic research projects. Researchers must be responsive to these needs and ensure that they produce high-impact research evidence from which health benefits can follow. Policymakers and practitioners who use genomics research evidence need to understand with confidence the outcomes of their actions, so that they can make informed decisions with the potential for positive health impact for Indigenous people (McCalman et al. ). Ultimately, little progress can be made in the field of Indigenous genomics without specific attention to, and investment in, Indigenous leadership, control, and ownership of genomic research, together with a commitment to prioritizing genomics healthcare to enable the well-being and better health of Indigenous people. Findings of a narrative review on capacity building of Indigenous health researchers reported the need for improvements in collaborative research between Indigenous and non-Indigenous researchers and organizations. That review discussed the need for a shift from a deficit-based to a strengths-based education and research focus with respect to both participation and quality, called for Indigenous health researchers to lead or co-lead projects given their commitment to research that makes a meaningful contribution to community well-being, and highlighted the importance of strengthening the research capabilities of community members (Ewen, Ryan, and Platania-Phung ). Research capacity building is imperative to achieve gains in workforce development, improving health systems, and undertaking research studies with Indigenous communities.

5.1 Strengths and Limitations
A strength of this review is the inclusion of quantitative and qualitative papers in data extraction efforts and the assessment of all studies, using a socio-ecological framework, to systematically examine Indigenous community involvement.
This approach highlighted that only a small number of studies sought approval from Indigenous review boards or tribal groups/leaders, providing additional insight into areas of need when designing pharmacogenomic research efforts in partnership with Indigenous communities. While this review included studies spanning a broad range of global regions, the limited involvement of Indigenous researchers and organizations means that the contextualization and interpretation of results may be lacking. Notably, this work is not representative of all global Indigenous populations, owing to the diversity of these communities and the dearth of literature from some regions.
B.F.N. contributed to the study conceptualization and design, conducting the review, data extraction, analysis, and appraisal, interpretation of the data, and drafting the manuscript and revising it critically, and gave final approval of the version to be published. R.V. contributed to the data extraction, analysis, and appraisal, interpretation of the data, and drafting the manuscript and revising it critically, and gave final approval of the version to be published. L.M. contributed to the writing of the qualitative analysis and the editing of the qualitative data table, made additions to the manuscript, and gave final approval of the version to be published. M.T. contributed to the study conceptualization and design and to revising the manuscript critically, and gave final approval of the version to be published. S.H.N. contributed to the study design and to revising the manuscript critically, and gave final approval of the version to be published. K.M.R. contributed to the study design, data extraction, analysis, and appraisal, interpretation of the data, and revising the manuscript critically, and gave final approval of the version to be published. The authors declare no conflicts of interest.